00:00:00.002 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 595 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3261 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.076 using credential 00000000-0000-0000-0000-000000000002 00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.105 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.145 Using shallow fetch with depth 1 00:00:00.145 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.145 > git --version # timeout=10 00:00:00.171 > git --version # 'git version 2.39.2' 00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.389 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.401 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.413 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:04.413 > git config core.sparsecheckout # timeout=10 00:00:04.423 > git read-tree -mu HEAD # timeout=10 00:00:04.439 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:04.458 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:04.458 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:04.541 [Pipeline] Start of Pipeline 00:00:04.553 [Pipeline] library 00:00:04.554 Loading library shm_lib@master 00:00:04.554 Library shm_lib@master is cached. Copying from home. 00:00:04.568 [Pipeline] node 00:00:04.577 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.579 [Pipeline] { 00:00:04.589 [Pipeline] catchError 00:00:04.590 [Pipeline] { 00:00:04.600 [Pipeline] wrap 00:00:04.607 [Pipeline] { 00:00:04.613 [Pipeline] stage 00:00:04.615 [Pipeline] { (Prologue) 00:00:04.630 [Pipeline] echo 00:00:04.631 Node: VM-host-SM9 00:00:04.635 [Pipeline] cleanWs 00:00:04.643 [WS-CLEANUP] Deleting project workspace... 00:00:04.643 [WS-CLEANUP] Deferred wipeout is used... 
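[Annotation] The prologue above shows the Jenkins git plugin pinning the jbp helper repo to one exact commit via a shallow fetch. A minimal sketch of that pinned-checkout pattern, using the URL and revision from the log; the workspace handling around it is illustrative, not the plugin's actual code:

    #!/usr/bin/env bash
    # Sketch: reproduce the pinned shallow checkout traced above. Fetch only
    # the tip of master, then force-checkout the revision Jenkins resolved.
    set -euo pipefail

    repo_url="https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool"
    revision="4b79378c7834917407ff4d2cff4edf1dcbb13c5f"   # from the log above

    git init jbp && cd jbp
    git config remote.origin.url "$repo_url"
    # --depth=1 keeps the workspace small; CI nodes never need full history.
    git fetch --tags --force --progress --depth=1 -- "$repo_url" refs/heads/master
    # Checking out the resolved SHA rather than the branch makes the run
    # reproducible on every node (note the later `git reset --hard` over SSH
    # landing on the same commit).
    git checkout -f "$revision"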
00:00:04.649 [WS-CLEANUP] done 00:00:04.806 [Pipeline] setCustomBuildProperty 00:00:04.886 [Pipeline] httpRequest 00:00:04.915 [Pipeline] echo 00:00:04.917 Sorcerer 10.211.164.101 is alive 00:00:04.925 [Pipeline] httpRequest 00:00:04.929 HttpMethod: GET 00:00:04.930 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.930 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.942 Response Code: HTTP/1.1 200 OK 00:00:04.943 Success: Status code 200 is in the accepted range: 200,404 00:00:04.943 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:10.927 [Pipeline] sh 00:00:11.211 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:11.229 [Pipeline] httpRequest 00:00:11.257 [Pipeline] echo 00:00:11.259 Sorcerer 10.211.164.101 is alive 00:00:11.269 [Pipeline] httpRequest 00:00:11.274 HttpMethod: GET 00:00:11.274 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:11.275 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:11.290 Response Code: HTTP/1.1 200 OK 00:00:11.290 Success: Status code 200 is in the accepted range: 200,404 00:00:11.291 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:15.360 [Pipeline] sh 00:01:15.639 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:18.184 [Pipeline] sh 00:01:18.466 + git -C spdk log --oneline -n5 00:01:18.466 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:18.466 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:18.466 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:18.466 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:18.466 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:18.488 [Pipeline] withCredentials 00:01:18.499 > git --version # timeout=10 00:01:18.513 > git --version # 'git version 2.39.2' 00:01:18.530 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:18.533 [Pipeline] { 00:01:18.543 [Pipeline] retry 00:01:18.545 [Pipeline] { 00:01:18.564 [Pipeline] sh 00:01:18.845 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:19.117 [Pipeline] } 00:01:19.139 [Pipeline] // retry 00:01:19.145 [Pipeline] } 00:01:19.165 [Pipeline] // withCredentials 00:01:19.177 [Pipeline] httpRequest 00:01:19.202 [Pipeline] echo 00:01:19.204 Sorcerer 10.211.164.101 is alive 00:01:19.214 [Pipeline] httpRequest 00:01:19.219 HttpMethod: GET 00:01:19.219 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.220 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.221 Response Code: HTTP/1.1 200 OK 00:01:19.221 Success: Status code 200 is in the accepted range: 200,404 00:01:19.221 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.921 [Pipeline] sh 00:01:29.199 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.634 [Pipeline] sh 00:01:30.915 + git -C dpdk log --oneline -n5 00:01:30.915 eeb0605f11 version: 23.11.0 00:01:30.915 238778122a doc: update release notes for 23.11 00:01:30.915 46aa6b3cfc doc: fix description of RSS features 
00:01:30.915 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:30.915 7e421ae345 devtools: support skipping forbid rule check 00:01:30.938 [Pipeline] writeFile 00:01:30.955 [Pipeline] sh 00:01:31.237 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.252 [Pipeline] sh 00:01:31.538 + cat autorun-spdk.conf 00:01:31.538 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.538 SPDK_TEST_NVMF=1 00:01:31.538 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.538 SPDK_TEST_URING=1 00:01:31.538 SPDK_TEST_USDT=1 00:01:31.538 SPDK_RUN_UBSAN=1 00:01:31.538 NET_TYPE=virt 00:01:31.538 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.538 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.538 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.545 RUN_NIGHTLY=1 00:01:31.547 [Pipeline] } 00:01:31.564 [Pipeline] // stage 00:01:31.582 [Pipeline] stage 00:01:31.584 [Pipeline] { (Run VM) 00:01:31.599 [Pipeline] sh 00:01:31.879 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:31.879 + echo 'Start stage prepare_nvme.sh' 00:01:31.879 Start stage prepare_nvme.sh 00:01:31.879 + [[ -n 2 ]] 00:01:31.879 + disk_prefix=ex2 00:01:31.879 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:31.879 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:31.879 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:31.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.879 ++ SPDK_TEST_NVMF=1 00:01:31.879 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.879 ++ SPDK_TEST_URING=1 00:01:31.879 ++ SPDK_TEST_USDT=1 00:01:31.879 ++ SPDK_RUN_UBSAN=1 00:01:31.879 ++ NET_TYPE=virt 00:01:31.879 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.879 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.879 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.879 ++ RUN_NIGHTLY=1 00:01:31.879 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:31.879 + nvme_files=() 00:01:31.879 + declare -A nvme_files 00:01:31.879 + backend_dir=/var/lib/libvirt/images/backends 00:01:31.879 + nvme_files['nvme.img']=5G 00:01:31.879 + nvme_files['nvme-cmb.img']=5G 00:01:31.879 + nvme_files['nvme-multi0.img']=4G 00:01:31.879 + nvme_files['nvme-multi1.img']=4G 00:01:31.879 + nvme_files['nvme-multi2.img']=4G 00:01:31.879 + nvme_files['nvme-openstack.img']=8G 00:01:31.879 + nvme_files['nvme-zns.img']=5G 00:01:31.879 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:31.879 + (( SPDK_TEST_FTL == 1 )) 00:01:31.879 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:31.879 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:31.879 + for nvme in "${!nvme_files[@]}" 00:01:31.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:31.879 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.879 + for nvme in "${!nvme_files[@]}" 00:01:31.879 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:31.879 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.880 + for nvme in "${!nvme_files[@]}" 00:01:31.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:32.138 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.138 + for nvme in "${!nvme_files[@]}" 00:01:32.138 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:32.138 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.138 + for nvme in "${!nvme_files[@]}" 00:01:32.138 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:32.138 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.138 + for nvme in "${!nvme_files[@]}" 00:01:32.138 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:32.138 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.138 + for nvme in "${!nvme_files[@]}" 00:01:32.138 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:32.396 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.396 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:32.396 + echo 'End stage prepare_nvme.sh' 00:01:32.396 End stage prepare_nvme.sh 00:01:32.408 [Pipeline] sh 00:01:32.688 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.688 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:01:32.946 00:01:32.946 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:32.946 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:32.946 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:32.946 HELP=0 00:01:32.946 DRY_RUN=0 00:01:32.946 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:32.946 NVME_DISKS_TYPE=nvme,nvme, 00:01:32.946 NVME_AUTO_CREATE=0 00:01:32.946 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:32.946 NVME_CMB=,, 00:01:32.946 NVME_PMR=,, 00:01:32.946 NVME_ZNS=,, 00:01:32.946 NVME_MS=,, 00:01:32.946 NVME_FDP=,, 
00:01:32.946 SPDK_VAGRANT_DISTRO=fedora38 00:01:32.946 SPDK_VAGRANT_VMCPU=10 00:01:32.946 SPDK_VAGRANT_VMRAM=12288 00:01:32.946 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.946 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.946 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.946 SPDK_OPENSTACK_NETWORK=0 00:01:32.946 VAGRANT_PACKAGE_BOX=0 00:01:32.946 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.946 FORCE_DISTRO=true 00:01:32.946 VAGRANT_BOX_VERSION= 00:01:32.946 EXTRA_VAGRANTFILES= 00:01:32.946 NIC_MODEL=e1000 00:01:32.946 00:01:32.946 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:32.946 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.236 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.497 ==> default: Creating image (snapshot of base box volume). 00:01:36.497 ==> default: Creating domain with the following settings... 00:01:36.497 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720765576_edeec99698efa53f6d1a 00:01:36.497 ==> default: -- Domain type: kvm 00:01:36.497 ==> default: -- Cpus: 10 00:01:36.497 ==> default: -- Feature: acpi 00:01:36.497 ==> default: -- Feature: apic 00:01:36.497 ==> default: -- Feature: pae 00:01:36.497 ==> default: -- Memory: 12288M 00:01:36.497 ==> default: -- Memory Backing: hugepages: 00:01:36.497 ==> default: -- Management MAC: 00:01:36.497 ==> default: -- Loader: 00:01:36.497 ==> default: -- Nvram: 00:01:36.498 ==> default: -- Base box: spdk/fedora38 00:01:36.498 ==> default: -- Storage pool: default 00:01:36.498 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720765576_edeec99698efa53f6d1a.img (20G) 00:01:36.498 ==> default: -- Volume Cache: default 00:01:36.498 ==> default: -- Kernel: 00:01:36.498 ==> default: -- Initrd: 00:01:36.498 ==> default: -- Graphics Type: vnc 00:01:36.498 ==> default: -- Graphics Port: -1 00:01:36.498 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.498 ==> default: -- Graphics Password: Not defined 00:01:36.498 ==> default: -- Video Type: cirrus 00:01:36.498 ==> default: -- Video VRAM: 9216 00:01:36.498 ==> default: -- Sound Type: 00:01:36.498 ==> default: -- Keymap: en-us 00:01:36.498 ==> default: -- TPM Path: 00:01:36.498 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.498 ==> default: -- Command line args: 00:01:36.498 ==> default: -> value=-device, 00:01:36.498 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:36.498 ==> default: -> value=-drive, 00:01:36.498 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.498 ==> default: -> value=-device, 00:01:36.498 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.498 ==> default: -> value=-device, 00:01:36.498 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:36.498 ==> default: -> value=-drive, 00:01:36.498 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:36.498 ==> default: -> value=-device, 00:01:36.498 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.498 ==> default: -> value=-drive, 00:01:36.498 ==> default: 
-> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:36.498 ==> default: -> value=-device, 00:01:36.498 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.498 ==> default: -> value=-drive, 00:01:36.498 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:36.498 ==> default: -> value=-device, 00:01:36.498 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.498 ==> default: Creating shared folders metadata... 00:01:36.498 ==> default: Starting domain. 00:01:37.879 ==> default: Waiting for domain to get an IP address... 00:01:55.971 ==> default: Waiting for SSH to become available... 00:01:55.971 ==> default: Configuring and enabling network interfaces... 00:01:59.259 default: SSH address: 192.168.121.103:22 00:01:59.259 default: SSH username: vagrant 00:01:59.259 default: SSH auth method: private key 00:02:01.166 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:07.734 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:14.296 ==> default: Mounting SSHFS shared folder... 00:02:15.232 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.232 ==> default: Checking Mount.. 00:02:16.635 ==> default: Folder Successfully Mounted! 00:02:16.635 ==> default: Running provisioner: file... 00:02:17.229 default: ~/.gitconfig => .gitconfig 00:02:17.795 00:02:17.795 SUCCESS! 00:02:17.795 00:02:17.795 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:17.795 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:17.795 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:17.795 00:02:17.804 [Pipeline] } 00:02:17.822 [Pipeline] // stage 00:02:17.831 [Pipeline] dir 00:02:17.832 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:17.834 [Pipeline] { 00:02:17.848 [Pipeline] catchError 00:02:17.850 [Pipeline] { 00:02:17.867 [Pipeline] sh 00:02:18.147 + vagrant ssh-config --host vagrant 00:02:18.148 + sed -ne /^Host/,$p 00:02:18.148 + tee ssh_conf 00:02:21.430 Host vagrant 00:02:21.430 HostName 192.168.121.103 00:02:21.430 User vagrant 00:02:21.430 Port 22 00:02:21.430 UserKnownHostsFile /dev/null 00:02:21.430 StrictHostKeyChecking no 00:02:21.430 PasswordAuthentication no 00:02:21.430 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:21.430 IdentitiesOnly yes 00:02:21.430 LogLevel FATAL 00:02:21.430 ForwardAgent yes 00:02:21.430 ForwardX11 yes 00:02:21.430 00:02:21.443 [Pipeline] withEnv 00:02:21.445 [Pipeline] { 00:02:21.458 [Pipeline] sh 00:02:21.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:21.732 source /etc/os-release 00:02:21.732 [[ -e /image.version ]] && img=$(< /image.version) 00:02:21.732 # Minimal, systemd-like check. 
00:02:21.732 if [[ -e /.dockerenv ]]; then 00:02:21.732 # Clear garbage from the node's name: 00:02:21.732 # agt-er_autotest_547-896 -> autotest_547-896 00:02:21.732 # $HOSTNAME is the actual container id 00:02:21.732 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:21.732 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:21.732 # We can assume this is a mount from a host where container is running, 00:02:21.732 # so fetch its hostname to easily identify the target swarm worker. 00:02:21.732 container="$(< /etc/hostname) ($agent)" 00:02:21.732 else 00:02:21.732 # Fallback 00:02:21.732 container=$agent 00:02:21.732 fi 00:02:21.732 fi 00:02:21.732 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:21.732 00:02:22.001 [Pipeline] } 00:02:22.022 [Pipeline] // withEnv 00:02:22.032 [Pipeline] setCustomBuildProperty 00:02:22.047 [Pipeline] stage 00:02:22.049 [Pipeline] { (Tests) 00:02:22.068 [Pipeline] sh 00:02:22.345 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:22.615 [Pipeline] sh 00:02:22.894 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:23.167 [Pipeline] timeout 00:02:23.167 Timeout set to expire in 30 min 00:02:23.169 [Pipeline] { 00:02:23.183 [Pipeline] sh 00:02:23.468 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:24.032 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:24.045 [Pipeline] sh 00:02:24.322 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:24.594 [Pipeline] sh 00:02:24.873 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:25.148 [Pipeline] sh 00:02:25.445 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:25.445 ++ readlink -f spdk_repo 00:02:25.445 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.445 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.445 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.445 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.445 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.445 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.445 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.445 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:25.445 + cd /home/vagrant/spdk_repo 00:02:25.445 + source /etc/os-release 00:02:25.445 ++ NAME='Fedora Linux' 00:02:25.445 ++ VERSION='38 (Cloud Edition)' 00:02:25.445 ++ ID=fedora 00:02:25.445 ++ VERSION_ID=38 00:02:25.445 ++ VERSION_CODENAME= 00:02:25.445 ++ PLATFORM_ID=platform:f38 00:02:25.445 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:25.445 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.445 ++ LOGO=fedora-logo-icon 00:02:25.445 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:25.445 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.445 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:25.445 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.445 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.445 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.445 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:25.445 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.445 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:25.445 ++ SUPPORT_END=2024-05-14 00:02:25.445 ++ VARIANT='Cloud Edition' 00:02:25.445 ++ VARIANT_ID=cloud 00:02:25.445 + uname -a 00:02:25.445 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:25.445 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:25.705 Hugepages 00:02:25.705 node hugesize free / total 00:02:25.705 node0 1048576kB 0 / 0 00:02:25.705 node0 2048kB 0 / 0 00:02:25.705 00:02:25.705 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.705 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:25.705 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:25.705 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:25.705 + rm -f /tmp/spdk-ld-path 00:02:25.705 + source autorun-spdk.conf 00:02:25.705 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.705 ++ SPDK_TEST_NVMF=1 00:02:25.705 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.705 ++ SPDK_TEST_URING=1 00:02:25.705 ++ SPDK_TEST_USDT=1 00:02:25.705 ++ SPDK_RUN_UBSAN=1 00:02:25.705 ++ NET_TYPE=virt 00:02:25.705 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.705 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.705 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.705 ++ RUN_NIGHTLY=1 00:02:25.705 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:25.705 + [[ -n '' ]] 00:02:25.705 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:25.965 + for M in /var/spdk/build-*-manifest.txt 00:02:25.965 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:25.965 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.965 + for M in /var/spdk/build-*-manifest.txt 00:02:25.965 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:25.965 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.965 ++ uname 00:02:25.965 + [[ Linux == \L\i\n\u\x ]] 00:02:25.965 + sudo dmesg -T 00:02:25.965 + sudo dmesg --clear 00:02:25.965 + dmesg_pid=5859 00:02:25.965 + [[ Fedora Linux == FreeBSD ]] 00:02:25.965 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.965 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.965 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:25.965 + [[ -x /usr/src/fio-static/fio ]] 00:02:25.965 + sudo dmesg -Tw 00:02:25.965 + export FIO_BIN=/usr/src/fio-static/fio 
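[Annotation] The `+`/`++` xtrace lines above show the autorun wrapper sourcing autorun-spdk.conf so every SPDK_TEST_* switch becomes an environment variable before tests start. A condensed sketch of that load-and-validate step; the file path and variable names come from the log, but the checks here are illustrative, not the real spdk/autorun.sh:

    #!/usr/bin/env bash
    # Sketch: load the job's test matrix and sanity-check the DPDK location.
    set -euo pipefail

    conf=/home/vagrant/spdk_repo/autorun-spdk.conf
    [[ -e $conf ]] || { echo "missing $conf" >&2; exit 1; }

    # The config is plain KEY=value bash, so sourcing it is the whole parser.
    source "$conf"

    # SPDK_RUN_EXTERNAL_DPDK points at dpdk/build, which is created later;
    # check that the repo directory above it exists before configure runs.
    if [[ -n ${SPDK_RUN_EXTERNAL_DPDK:-} && ! -d ${SPDK_RUN_EXTERNAL_DPDK%/*} ]]; then
        echo "external DPDK repo not found" >&2; exit 1
    fi

    echo "nvmf=${SPDK_TEST_NVMF:-0} transport=${SPDK_TEST_NVMF_TRANSPORT:-}" \
         "uring=${SPDK_TEST_URING:-0} ubsan=${SPDK_RUN_UBSAN:-0} nightly=${RUN_NIGHTLY:-0}"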
00:02:25.965 + FIO_BIN=/usr/src/fio-static/fio 00:02:25.965 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:25.965 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:25.965 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:25.965 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.965 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.965 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:25.965 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.965 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.965 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.965 Test configuration: 00:02:25.965 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.965 SPDK_TEST_NVMF=1 00:02:25.965 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.965 SPDK_TEST_URING=1 00:02:25.965 SPDK_TEST_USDT=1 00:02:25.965 SPDK_RUN_UBSAN=1 00:02:25.965 NET_TYPE=virt 00:02:25.965 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.965 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.965 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.965 RUN_NIGHTLY=1 06:27:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:25.965 06:27:05 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:25.965 06:27:05 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.965 06:27:05 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.965 06:27:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.965 06:27:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.965 06:27:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.965 06:27:05 -- paths/export.sh@5 -- $ export PATH 00:02:25.965 06:27:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.965 06:27:05 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:25.965 06:27:05 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:25.965 06:27:05 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720765625.XXXXXX 00:02:25.965 06:27:05 -- 
common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720765625.Xli94e 00:02:25.965 06:27:05 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:25.965 06:27:05 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:02:25.965 06:27:05 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:25.965 06:27:05 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:25.965 06:27:05 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:25.966 06:27:05 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:25.966 06:27:05 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:25.966 06:27:05 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:25.966 06:27:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.966 06:27:05 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:25.966 06:27:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:25.966 06:27:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:25.966 06:27:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:25.966 06:27:05 -- spdk/autobuild.sh@16 -- $ date -u 00:02:25.966 Fri Jul 12 06:27:05 AM UTC 2024 00:02:25.966 06:27:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:25.966 LTS-59-g4b94202c6 00:02:25.966 06:27:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:25.966 06:27:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:25.966 06:27:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:25.966 06:27:05 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:25.966 06:27:05 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:25.966 06:27:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.966 ************************************ 00:02:25.966 START TEST ubsan 00:02:25.966 ************************************ 00:02:25.966 using ubsan 00:02:25.966 06:27:05 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:25.966 00:02:25.966 real 0m0.000s 00:02:25.966 user 0m0.000s 00:02:25.966 sys 0m0.000s 00:02:25.966 06:27:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.966 06:27:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.966 ************************************ 00:02:25.966 END TEST ubsan 00:02:25.966 ************************************ 00:02:26.225 06:27:05 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:26.225 06:27:05 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:26.225 06:27:05 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:26.225 06:27:05 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:26.225 06:27:05 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:26.225 06:27:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.225 ************************************ 00:02:26.225 START TEST build_native_dpdk 00:02:26.225 ************************************ 00:02:26.225 06:27:05 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:26.225 
06:27:05 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:26.225 06:27:05 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:26.225 06:27:05 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:26.225 06:27:05 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:26.225 06:27:05 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:26.225 06:27:05 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:26.225 06:27:05 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:26.225 06:27:05 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:26.225 06:27:05 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:26.225 06:27:05 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:26.225 06:27:05 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:26.225 06:27:05 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:26.225 06:27:05 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:26.225 06:27:05 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:26.225 06:27:05 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:26.225 06:27:05 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:26.225 06:27:05 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:26.225 eeb0605f11 version: 23.11.0 00:02:26.225 238778122a doc: update release notes for 23.11 00:02:26.225 46aa6b3cfc doc: fix description of RSS features 00:02:26.225 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:26.225 7e421ae345 devtools: support skipping forbid rule check 00:02:26.225 06:27:05 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:26.225 06:27:05 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:26.225 06:27:05 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:26.225 06:27:05 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:26.225 06:27:05 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:26.225 06:27:05 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:26.225 06:27:05 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:26.225 06:27:05 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:26.225 06:27:05 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:26.225 06:27:05 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:26.225 06:27:05 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:26.225 06:27:05 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:26.225 
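[Annotation] The trace above probes the compiler (`gcc -dumpversion` giving compiler_version=13) and gates flags on it: `[[ 13 -ge 5 ]]` adds -Werror and `[[ 13 -ge 10 ]]` adds -Wno-stringop-overflow. A self-contained sketch of that probing, with the thresholds and flags taken from the log and the helper itself illustrative:

    #!/usr/bin/env bash
    # Sketch: detect the gcc major version and only add flags the CI has
    # vetted for that toolchain generation.
    set -euo pipefail

    CC=${CC:-gcc}
    compiler_version=$("$CC" -dumpversion | cut -d. -f1)

    dpdk_cflags="-fPIC -g -fcommon"
    if (( compiler_version >= 5 )); then
        dpdk_cflags+=" -Werror"              # old enough toolchains stay permissive
    fi
    if (( compiler_version >= 10 )); then
        # gcc 10+ emits stringop-overflow false positives against DPDK 23.11.
        dpdk_cflags+=" -Wno-stringop-overflow"
    fi

    echo "CC=$CC ($compiler_version): $dpdk_cflags"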
06:27:05 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:26.225 06:27:05 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:26.225 06:27:05 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:26.225 06:27:05 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:26.225 06:27:05 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:26.225 06:27:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:26.225 06:27:05 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:26.225 06:27:05 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:26.225 06:27:05 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:26.225 06:27:05 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:26.226 06:27:05 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:26.226 06:27:05 -- scripts/common.sh@343 -- $ case "$op" in 00:02:26.226 06:27:05 -- scripts/common.sh@344 -- $ : 1 00:02:26.226 06:27:05 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:26.226 06:27:05 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:26.226 06:27:05 -- scripts/common.sh@364 -- $ decimal 23 00:02:26.226 06:27:05 -- scripts/common.sh@352 -- $ local d=23 00:02:26.226 06:27:05 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:26.226 06:27:05 -- scripts/common.sh@354 -- $ echo 23 00:02:26.226 06:27:05 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:26.226 06:27:05 -- scripts/common.sh@365 -- $ decimal 21 00:02:26.226 06:27:05 -- scripts/common.sh@352 -- $ local d=21 00:02:26.226 06:27:05 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:26.226 06:27:05 -- scripts/common.sh@354 -- $ echo 21 00:02:26.226 06:27:05 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:26.226 06:27:05 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:26.226 06:27:05 -- scripts/common.sh@366 -- $ return 1 00:02:26.226 06:27:05 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:26.226 patching file config/rte_config.h 00:02:26.226 Hunk #1 succeeded at 60 (offset 1 line). 
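[Annotation] The cmp_versions walk traced above splits both version strings on `.`/`-`/`:`, compares component by component, and returns 1 at the first component because 23 > 21. Since 23.11.0 is not older than 21.11.0, the caller falls through to `patch -p1`, which applies the rte_config.h tweak. A compact re-implementation of the idea; this is a sketch, not the exact scripts/common.sh code:

    #!/usr/bin/env bash
    # Sketch: does $1 sort strictly before $2 as a dotted version?
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for i in 0 1 2; do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger
        done
        return 1   # equal versions are not strictly less
    }

    # 23 > 21 on the first component, so this prints "21.11 or newer",
    # matching the `return 1` in the trace above.
    version_lt 23.11.0 21.11.0 && echo "older than 21.11" || echo "21.11 or newer"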
00:02:26.226 06:27:05 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:26.226 06:27:05 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:26.226 06:27:05 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:26.226 06:27:05 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:26.226 06:27:05 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:31.498 The Meson build system 00:02:31.498 Version: 1.3.1 00:02:31.498 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:31.498 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:31.498 Build type: native build 00:02:31.498 Program cat found: YES (/usr/bin/cat) 00:02:31.498 Project name: DPDK 00:02:31.498 Project version: 23.11.0 00:02:31.498 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.498 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:31.498 Host machine cpu family: x86_64 00:02:31.498 Host machine cpu: x86_64 00:02:31.498 Message: ## Building in Developer Mode ## 00:02:31.498 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.498 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:31.498 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.498 Program python3 found: YES (/usr/bin/python3) 00:02:31.498 Program cat found: YES (/usr/bin/cat) 00:02:31.498 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
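[Annotation] That config/meson.build warning is benign here: the build script still passes -Dmachine=native, which DPDK 23.11 accepts and maps onto the newer option. For reference, a sketch of the non-deprecated spelling of the same configure call, with the prefix, options, and driver list taken from the setup command above (using the explicit `meson setup` subcommand also avoids the separate deprecation warning printed at the end of configuration):

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base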
00:02:31.498 Compiler for C supports arguments -march=native: YES 00:02:31.498 Checking for size of "void *" : 8 00:02:31.498 Checking for size of "void *" : 8 (cached) 00:02:31.498 Library m found: YES 00:02:31.498 Library numa found: YES 00:02:31.498 Has header "numaif.h" : YES 00:02:31.498 Library fdt found: NO 00:02:31.498 Library execinfo found: NO 00:02:31.498 Has header "execinfo.h" : YES 00:02:31.498 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.498 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:31.498 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:31.498 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:31.498 Run-time dependency openssl found: YES 3.0.9 00:02:31.498 Run-time dependency libpcap found: YES 1.10.4 00:02:31.498 Has header "pcap.h" with dependency libpcap: YES 00:02:31.498 Compiler for C supports arguments -Wcast-qual: YES 00:02:31.498 Compiler for C supports arguments -Wdeprecated: YES 00:02:31.498 Compiler for C supports arguments -Wformat: YES 00:02:31.498 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:31.498 Compiler for C supports arguments -Wformat-security: NO 00:02:31.498 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.498 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:31.498 Compiler for C supports arguments -Wnested-externs: YES 00:02:31.498 Compiler for C supports arguments -Wold-style-definition: YES 00:02:31.498 Compiler for C supports arguments -Wpointer-arith: YES 00:02:31.498 Compiler for C supports arguments -Wsign-compare: YES 00:02:31.498 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:31.498 Compiler for C supports arguments -Wundef: YES 00:02:31.498 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.498 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:31.498 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:31.498 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.498 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:31.498 Program objdump found: YES (/usr/bin/objdump) 00:02:31.498 Compiler for C supports arguments -mavx512f: YES 00:02:31.498 Checking if "AVX512 checking" compiles: YES 00:02:31.498 Fetching value of define "__SSE4_2__" : 1 00:02:31.498 Fetching value of define "__AES__" : 1 00:02:31.498 Fetching value of define "__AVX__" : 1 00:02:31.498 Fetching value of define "__AVX2__" : 1 00:02:31.498 Fetching value of define "__AVX512BW__" : (undefined) 00:02:31.498 Fetching value of define "__AVX512CD__" : (undefined) 00:02:31.498 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:31.498 Fetching value of define "__AVX512F__" : (undefined) 00:02:31.498 Fetching value of define "__AVX512VL__" : (undefined) 00:02:31.498 Fetching value of define "__PCLMUL__" : 1 00:02:31.498 Fetching value of define "__RDRND__" : 1 00:02:31.498 Fetching value of define "__RDSEED__" : 1 00:02:31.498 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:31.498 Fetching value of define "__znver1__" : (undefined) 00:02:31.498 Fetching value of define "__znver2__" : (undefined) 00:02:31.498 Fetching value of define "__znver3__" : (undefined) 00:02:31.498 Fetching value of define "__znver4__" : (undefined) 00:02:31.498 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:31.498 Message: lib/log: Defining dependency "log" 00:02:31.498 Message: lib/kvargs: Defining dependency "kvargs" 00:02:31.498 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:31.498 Checking for function "getentropy" : NO 00:02:31.498 Message: lib/eal: Defining dependency "eal" 00:02:31.498 Message: lib/ring: Defining dependency "ring" 00:02:31.498 Message: lib/rcu: Defining dependency "rcu" 00:02:31.498 Message: lib/mempool: Defining dependency "mempool" 00:02:31.498 Message: lib/mbuf: Defining dependency "mbuf" 00:02:31.498 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:31.498 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.498 Compiler for C supports arguments -mpclmul: YES 00:02:31.498 Compiler for C supports arguments -maes: YES 00:02:31.498 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.498 Compiler for C supports arguments -mavx512bw: YES 00:02:31.498 Compiler for C supports arguments -mavx512dq: YES 00:02:31.498 Compiler for C supports arguments -mavx512vl: YES 00:02:31.498 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:31.498 Compiler for C supports arguments -mavx2: YES 00:02:31.498 Compiler for C supports arguments -mavx: YES 00:02:31.498 Message: lib/net: Defining dependency "net" 00:02:31.498 Message: lib/meter: Defining dependency "meter" 00:02:31.498 Message: lib/ethdev: Defining dependency "ethdev" 00:02:31.498 Message: lib/pci: Defining dependency "pci" 00:02:31.498 Message: lib/cmdline: Defining dependency "cmdline" 00:02:31.498 Message: lib/metrics: Defining dependency "metrics" 00:02:31.498 Message: lib/hash: Defining dependency "hash" 00:02:31.498 Message: lib/timer: Defining dependency "timer" 00:02:31.498 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.498 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:31.498 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:31.498 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:31.498 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:31.498 Message: lib/acl: Defining dependency "acl" 00:02:31.498 Message: lib/bbdev: Defining dependency "bbdev" 00:02:31.498 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:31.498 Run-time dependency libelf found: YES 0.190 00:02:31.498 Message: lib/bpf: Defining dependency "bpf" 00:02:31.498 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:31.498 Message: lib/compressdev: Defining dependency "compressdev" 00:02:31.498 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:31.498 Message: lib/distributor: Defining dependency "distributor" 00:02:31.498 Message: lib/dmadev: Defining dependency "dmadev" 00:02:31.498 Message: lib/efd: Defining dependency "efd" 00:02:31.498 Message: lib/eventdev: Defining dependency "eventdev" 00:02:31.498 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:31.498 Message: lib/gpudev: Defining dependency "gpudev" 00:02:31.498 Message: lib/gro: Defining dependency "gro" 00:02:31.498 Message: lib/gso: Defining dependency "gso" 00:02:31.498 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:31.498 Message: lib/jobstats: Defining dependency "jobstats" 00:02:31.498 Message: lib/latencystats: Defining dependency "latencystats" 00:02:31.498 Message: lib/lpm: Defining dependency "lpm" 00:02:31.498 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.498 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:31.498 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:31.498 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:31.498 Message: lib/member: Defining dependency "member" 00:02:31.498 Message: lib/pcapng: Defining dependency "pcapng" 00:02:31.498 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:31.498 Message: lib/power: Defining dependency "power" 00:02:31.498 Message: lib/rawdev: Defining dependency "rawdev" 00:02:31.498 Message: lib/regexdev: Defining dependency "regexdev" 00:02:31.498 Message: lib/mldev: Defining dependency "mldev" 00:02:31.498 Message: lib/rib: Defining dependency "rib" 00:02:31.498 Message: lib/reorder: Defining dependency "reorder" 00:02:31.498 Message: lib/sched: Defining dependency "sched" 00:02:31.498 Message: lib/security: Defining dependency "security" 00:02:31.498 Message: lib/stack: Defining dependency "stack" 00:02:31.498 Has header "linux/userfaultfd.h" : YES 00:02:31.498 Has header "linux/vduse.h" : YES 00:02:31.498 Message: lib/vhost: Defining dependency "vhost" 00:02:31.498 Message: lib/ipsec: Defining dependency "ipsec" 00:02:31.498 Message: lib/pdcp: Defining dependency "pdcp" 00:02:31.498 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.498 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:31.498 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:31.498 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:31.498 Message: lib/fib: Defining dependency "fib" 00:02:31.498 Message: lib/port: Defining dependency "port" 00:02:31.499 Message: lib/pdump: Defining dependency "pdump" 00:02:31.499 Message: lib/table: Defining dependency "table" 00:02:31.499 Message: lib/pipeline: Defining dependency "pipeline" 00:02:31.499 Message: lib/graph: Defining dependency "graph" 00:02:31.499 Message: lib/node: Defining dependency "node" 00:02:31.499 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:32.872 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:32.872 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:32.872 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:32.873 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:32.873 Compiler for C supports arguments -Wno-unused-value: YES 00:02:32.873 Compiler for C supports arguments -Wno-format: YES 00:02:32.873 Compiler for C supports arguments -Wno-format-security: YES 00:02:32.873 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:32.873 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:32.873 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:32.873 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:32.873 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.873 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.873 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:32.873 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:32.873 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:32.873 Has header "sys/epoll.h" : YES 00:02:32.873 Program doxygen found: YES (/usr/bin/doxygen) 00:02:32.873 Configuring doxy-api-html.conf using configuration 00:02:32.873 Configuring doxy-api-man.conf using configuration 00:02:32.873 Program mandb found: YES (/usr/bin/mandb) 00:02:32.873 Program sphinx-build found: NO 00:02:32.873 Configuring rte_build_config.h using configuration 00:02:32.873 Message: 00:02:32.873 ================= 00:02:32.873 Applications Enabled 00:02:32.873 ================= 00:02:32.873 
00:02:32.873 apps: 00:02:32.873 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:32.873 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:32.873 test-pmd, test-regex, test-sad, test-security-perf, 00:02:32.873 00:02:32.873 Message: 00:02:32.873 ================= 00:02:32.873 Libraries Enabled 00:02:32.873 ================= 00:02:32.873 00:02:32.873 libs: 00:02:32.873 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:32.873 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:32.873 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:32.873 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:32.873 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:32.873 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:32.873 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:32.873 00:02:32.873 00:02:32.873 Message: 00:02:32.873 =============== 00:02:32.873 Drivers Enabled 00:02:32.873 =============== 00:02:32.873 00:02:32.873 common: 00:02:32.873 00:02:32.873 bus: 00:02:32.873 pci, vdev, 00:02:32.873 mempool: 00:02:32.873 ring, 00:02:32.873 dma: 00:02:32.873 00:02:32.873 net: 00:02:32.873 i40e, 00:02:32.873 raw: 00:02:32.873 00:02:32.873 crypto: 00:02:32.873 00:02:32.873 compress: 00:02:32.873 00:02:32.873 regex: 00:02:32.873 00:02:32.873 ml: 00:02:32.873 00:02:32.873 vdpa: 00:02:32.873 00:02:32.873 event: 00:02:32.873 00:02:32.873 baseband: 00:02:32.873 00:02:32.873 gpu: 00:02:32.873 00:02:32.873 00:02:32.873 Message: 00:02:32.873 ================= 00:02:32.873 Content Skipped 00:02:32.873 ================= 00:02:32.873 00:02:32.873 apps: 00:02:32.873 00:02:32.873 libs: 00:02:32.873 00:02:32.873 drivers: 00:02:32.873 common/cpt: not in enabled drivers build config 00:02:32.873 common/dpaax: not in enabled drivers build config 00:02:32.873 common/iavf: not in enabled drivers build config 00:02:32.873 common/idpf: not in enabled drivers build config 00:02:32.873 common/mvep: not in enabled drivers build config 00:02:32.873 common/octeontx: not in enabled drivers build config 00:02:32.873 bus/auxiliary: not in enabled drivers build config 00:02:32.873 bus/cdx: not in enabled drivers build config 00:02:32.873 bus/dpaa: not in enabled drivers build config 00:02:32.873 bus/fslmc: not in enabled drivers build config 00:02:32.873 bus/ifpga: not in enabled drivers build config 00:02:32.873 bus/platform: not in enabled drivers build config 00:02:32.873 bus/vmbus: not in enabled drivers build config 00:02:32.873 common/cnxk: not in enabled drivers build config 00:02:32.873 common/mlx5: not in enabled drivers build config 00:02:32.873 common/nfp: not in enabled drivers build config 00:02:32.873 common/qat: not in enabled drivers build config 00:02:32.873 common/sfc_efx: not in enabled drivers build config 00:02:32.873 mempool/bucket: not in enabled drivers build config 00:02:32.873 mempool/cnxk: not in enabled drivers build config 00:02:32.873 mempool/dpaa: not in enabled drivers build config 00:02:32.873 mempool/dpaa2: not in enabled drivers build config 00:02:32.873 mempool/octeontx: not in enabled drivers build config 00:02:32.873 mempool/stack: not in enabled drivers build config 00:02:32.873 dma/cnxk: not in enabled drivers build config 00:02:32.873 dma/dpaa: not in enabled drivers build config 00:02:32.873 dma/dpaa2: not in enabled drivers build config 00:02:32.873 dma/hisilicon: 
not in enabled drivers build config
00:02:32.873 dma/idxd: not in enabled drivers build config
00:02:32.873 dma/ioat: not in enabled drivers build config
00:02:32.873 dma/skeleton: not in enabled drivers build config
00:02:32.873 net/af_packet: not in enabled drivers build config
00:02:32.873 net/af_xdp: not in enabled drivers build config
00:02:32.873 net/ark: not in enabled drivers build config
00:02:32.873 net/atlantic: not in enabled drivers build config
00:02:32.873 net/avp: not in enabled drivers build config
00:02:32.873 net/axgbe: not in enabled drivers build config
00:02:32.873 net/bnx2x: not in enabled drivers build config
00:02:32.873 net/bnxt: not in enabled drivers build config
00:02:32.873 net/bonding: not in enabled drivers build config
00:02:32.873 net/cnxk: not in enabled drivers build config
00:02:32.873 net/cpfl: not in enabled drivers build config
00:02:32.873 net/cxgbe: not in enabled drivers build config
00:02:32.873 net/dpaa: not in enabled drivers build config
00:02:32.873 net/dpaa2: not in enabled drivers build config
00:02:32.873 net/e1000: not in enabled drivers build config
00:02:32.873 net/ena: not in enabled drivers build config
00:02:32.873 net/enetc: not in enabled drivers build config
00:02:32.873 net/enetfec: not in enabled drivers build config
00:02:32.873 net/enic: not in enabled drivers build config
00:02:32.873 net/failsafe: not in enabled drivers build config
00:02:32.873 net/fm10k: not in enabled drivers build config
00:02:32.873 net/gve: not in enabled drivers build config
00:02:32.873 net/hinic: not in enabled drivers build config
00:02:32.873 net/hns3: not in enabled drivers build config
00:02:32.873 net/iavf: not in enabled drivers build config
00:02:32.873 net/ice: not in enabled drivers build config
00:02:32.873 net/idpf: not in enabled drivers build config
00:02:32.873 net/igc: not in enabled drivers build config
00:02:32.873 net/ionic: not in enabled drivers build config
00:02:32.873 net/ipn3ke: not in enabled drivers build config
00:02:32.873 net/ixgbe: not in enabled drivers build config
00:02:32.873 net/mana: not in enabled drivers build config
00:02:32.873 net/memif: not in enabled drivers build config
00:02:32.873 net/mlx4: not in enabled drivers build config
00:02:32.873 net/mlx5: not in enabled drivers build config
00:02:32.873 net/mvneta: not in enabled drivers build config
00:02:32.873 net/mvpp2: not in enabled drivers build config
00:02:32.873 net/netvsc: not in enabled drivers build config
00:02:32.873 net/nfb: not in enabled drivers build config
00:02:32.873 net/nfp: not in enabled drivers build config
00:02:32.873 net/ngbe: not in enabled drivers build config
00:02:32.873 net/null: not in enabled drivers build config
00:02:32.873 net/octeontx: not in enabled drivers build config
00:02:32.873 net/octeon_ep: not in enabled drivers build config
00:02:32.873 net/pcap: not in enabled drivers build config
00:02:32.873 net/pfe: not in enabled drivers build config
00:02:32.873 net/qede: not in enabled drivers build config
00:02:32.873 net/ring: not in enabled drivers build config
00:02:32.873 net/sfc: not in enabled drivers build config
00:02:32.873 net/softnic: not in enabled drivers build config
00:02:32.873 net/tap: not in enabled drivers build config
00:02:32.873 net/thunderx: not in enabled drivers build config
00:02:32.873 net/txgbe: not in enabled drivers build config
00:02:32.873 net/vdev_netvsc: not in enabled drivers build config
00:02:32.873 net/vhost: not in enabled drivers build config
00:02:32.873 net/virtio: not in enabled drivers build config
00:02:32.873 net/vmxnet3: not in enabled drivers build config
00:02:32.873 raw/cnxk_bphy: not in enabled drivers build config
00:02:32.873 raw/cnxk_gpio: not in enabled drivers build config
00:02:32.873 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:32.873 raw/ifpga: not in enabled drivers build config
00:02:32.873 raw/ntb: not in enabled drivers build config
00:02:32.873 raw/skeleton: not in enabled drivers build config
00:02:32.873 crypto/armv8: not in enabled drivers build config
00:02:32.873 crypto/bcmfs: not in enabled drivers build config
00:02:32.873 crypto/caam_jr: not in enabled drivers build config
00:02:32.873 crypto/ccp: not in enabled drivers build config
00:02:32.873 crypto/cnxk: not in enabled drivers build config
00:02:32.873 crypto/dpaa_sec: not in enabled drivers build config
00:02:32.873 crypto/dpaa2_sec: not in enabled drivers build config
00:02:32.873 crypto/ipsec_mb: not in enabled drivers build config
00:02:32.873 crypto/mlx5: not in enabled drivers build config
00:02:32.873 crypto/mvsam: not in enabled drivers build config
00:02:32.873 crypto/nitrox: not in enabled drivers build config
00:02:32.873 crypto/null: not in enabled drivers build config
00:02:32.873 crypto/octeontx: not in enabled drivers build config
00:02:32.873 crypto/openssl: not in enabled drivers build config
00:02:32.873 crypto/scheduler: not in enabled drivers build config
00:02:32.873 crypto/uadk: not in enabled drivers build config
00:02:32.873 crypto/virtio: not in enabled drivers build config
00:02:32.873 compress/isal: not in enabled drivers build config
00:02:32.873 compress/mlx5: not in enabled drivers build config
00:02:32.873 compress/octeontx: not in enabled drivers build config
00:02:32.873 compress/zlib: not in enabled drivers build config
00:02:32.873 regex/mlx5: not in enabled drivers build config
00:02:32.873 regex/cn9k: not in enabled drivers build config
00:02:32.873 ml/cnxk: not in enabled drivers build config
00:02:32.873 vdpa/ifc: not in enabled drivers build config
00:02:32.873 vdpa/mlx5: not in enabled drivers build config
00:02:32.873 vdpa/nfp: not in enabled drivers build config
00:02:32.873 vdpa/sfc: not in enabled drivers build config
00:02:32.873 event/cnxk: not in enabled drivers build config
00:02:32.873 event/dlb2: not in enabled drivers build config
00:02:32.873 event/dpaa: not in enabled drivers build config
00:02:32.873 event/dpaa2: not in enabled drivers build config
00:02:32.873 event/dsw: not in enabled drivers build config
00:02:32.874 event/opdl: not in enabled drivers build config
00:02:32.874 event/skeleton: not in enabled drivers build config
00:02:32.874 event/sw: not in enabled drivers build config
00:02:32.874 event/octeontx: not in enabled drivers build config
00:02:32.874 baseband/acc: not in enabled drivers build config
00:02:32.874 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:32.874 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:32.874 baseband/la12xx: not in enabled drivers build config
00:02:32.874 baseband/null: not in enabled drivers build config
00:02:32.874 baseband/turbo_sw: not in enabled drivers build config
00:02:32.874 gpu/cuda: not in enabled drivers build config
00:02:32.874 
00:02:32.874 
00:02:32.874 Build targets in project: 220
00:02:32.874 
00:02:32.874 DPDK 23.11.0
00:02:32.874 
00:02:32.874 User defined options
00:02:32.874 libdir : lib
00:02:32.874 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:32.874 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:32.874 c_link_args :
00:02:32.874 enable_docs : false
00:02:32.874 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:32.874 enable_kmods : false
00:02:32.874 machine : native
00:02:32.874 tests : false
00:02:32.874 
00:02:32.874 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:32.874 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:33.133 06:27:12 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:33.133 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:33.133 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:33.133 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:33.133 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:33.133 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:33.133 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:33.133 [6/710] Linking static target lib/librte_kvargs.a
00:02:33.390 [7/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:33.390 [8/710] Linking static target lib/librte_log.a
00:02:33.390 [9/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:33.390 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:33.390 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.648 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:33.648 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.648 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:33.648 [15/710] Linking target lib/librte_log.so.24.0
00:02:33.906 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:33.906 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:33.906 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:34.164 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:34.164 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:34.164 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:34.164 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:34.164 [23/710] Linking target lib/librte_kvargs.so.24.0
00:02:34.423 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:34.423 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:34.423 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:34.423 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:34.423 [28/710] Linking static target lib/librte_telemetry.a
00:02:34.423 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:34.681 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:34.681 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:34.681 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:34.681 [33/710] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.940 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.940 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.940 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.940 [37/710] Linking target lib/librte_telemetry.so.24.0 00:02:34.940 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.940 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.940 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.940 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.940 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.940 [43/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:34.940 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.198 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.457 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.457 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.457 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.457 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.715 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:35.715 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.715 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.715 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.715 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.974 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.974 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.974 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.974 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.974 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.974 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.233 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.233 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:36.233 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.233 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.233 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.233 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.509 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.509 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.822 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:36.822 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.822 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:36.822 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
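For reference, the configuration summarized above can be reproduced by hand as a rough sketch; the option values below are taken directly from the "User defined options" summary in this log, and `meson setup` is spelled out explicitly to avoid the deprecation warning printed before the build:

  $ cd /home/vagrant/spdk_repo/dpdk
  $ meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir=lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'
  $ ninja -C build-tmp -j10

Because enable_drivers lists only the bus, ring-mempool and i40e directories, every other driver is reported as "not in enabled drivers build config" in the listing above.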
00:02:36.822 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.822 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.822 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:36.822 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.822 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.085 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.085 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:37.343 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.343 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.343 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.343 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.601 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.601 [85/710] Linking static target lib/librte_ring.a 00:02:37.601 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.601 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.601 [88/710] Linking static target lib/librte_eal.a 00:02:37.859 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.859 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.859 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.859 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.859 [93/710] Linking static target lib/librte_mempool.a 00:02:37.859 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.117 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.117 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.117 [97/710] Linking static target lib/librte_rcu.a 00:02:38.375 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.375 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.375 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.375 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.633 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [104/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.633 [105/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.633 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.892 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.892 [108/710] Linking static target lib/librte_mbuf.a 00:02:38.892 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.892 [110/710] Linking static target lib/librte_net.a 00:02:39.151 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.151 [112/710] Linking static target lib/librte_meter.a 00:02:39.151 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.151 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.408 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.408 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.408 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.408 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.408 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.974 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.974 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.232 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.491 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.491 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.491 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.491 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.491 [127/710] Linking static target lib/librte_pci.a 00:02:40.491 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.749 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.749 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.749 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.749 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.749 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.749 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.749 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.749 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.007 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.007 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.007 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.007 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.007 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.007 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.266 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.266 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.266 [145/710] Linking static target lib/librte_cmdline.a 00:02:41.525 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:41.525 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:41.525 [148/710] Linking static target lib/librte_metrics.a 00:02:41.525 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.783 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.041 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.041 [152/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.041 [153/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.299 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.299 [155/710] Linking static target lib/librte_timer.a 00:02:42.557 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.816 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:42.816 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.074 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:43.074 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:43.637 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.637 [162/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:43.637 [163/710] Linking static target lib/librte_ethdev.a 00:02:43.637 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:43.637 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:43.637 [166/710] Linking static target lib/librte_bitratestats.a 00:02:43.895 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:43.895 [168/710] Linking static target lib/librte_bbdev.a 00:02:43.895 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.895 [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.895 [171/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.895 [172/710] Linking static target lib/librte_hash.a 00:02:44.152 [173/710] Linking target lib/librte_eal.so.24.0 00:02:44.153 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:44.153 [175/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:44.153 [176/710] Linking target lib/librte_ring.so.24.0 00:02:44.409 [177/710] Linking target lib/librte_meter.so.24.0 00:02:44.409 [178/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:44.409 [179/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:44.409 [180/710] Linking target lib/librte_pci.so.24.0 00:02:44.409 [181/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:44.409 [182/710] Linking target lib/librte_rcu.so.24.0 00:02:44.409 [183/710] Linking target lib/librte_mempool.so.24.0 00:02:44.409 [184/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.409 [185/710] Linking target lib/librte_timer.so.24.0 00:02:44.667 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:44.667 [187/710] Linking static target lib/acl/libavx2_tmp.a 00:02:44.667 [188/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:44.667 [189/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:44.667 [190/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.667 [191/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:44.667 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:44.667 [193/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:44.667 [194/710] Linking target lib/librte_mbuf.so.24.0 00:02:44.667 [195/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:44.667 [196/710] Linking static target lib/acl/libavx512_tmp.a 00:02:44.925 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:44.925 [198/710] Linking target lib/librte_net.so.24.0 00:02:44.925 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:44.925 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:44.925 [201/710] Linking static target lib/librte_acl.a 00:02:44.925 [202/710] Linking target lib/librte_cmdline.so.24.0 00:02:44.925 [203/710] Linking target lib/librte_hash.so.24.0 00:02:45.182 [204/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:45.182 [205/710] Linking target lib/librte_bbdev.so.24.0 00:02:45.182 [206/710] Linking static target lib/librte_cfgfile.a 00:02:45.182 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.182 [208/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:45.182 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.440 [210/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:45.440 [211/710] Linking target lib/librte_acl.so.24.0 00:02:45.440 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:45.440 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:45.440 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.440 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:02:45.698 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:45.698 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:45.698 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.956 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:45.956 [220/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.956 [221/710] Linking static target lib/librte_bpf.a 00:02:46.213 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.213 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.213 [224/710] Linking static target lib/librte_compressdev.a 00:02:46.213 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.470 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.470 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.470 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.728 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.728 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.729 [231/710] Linking static target lib/librte_distributor.a 00:02:46.729 [232/710] Linking target lib/librte_compressdev.so.24.0 00:02:46.729 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.987 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.987 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:46.987 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:46.987 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.987 [238/710] Linking static 
target lib/librte_dmadev.a 00:02:47.552 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.552 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:47.552 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:47.552 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:47.809 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:47.809 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:47.809 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:47.809 [246/710] Linking static target lib/librte_efd.a 00:02:48.067 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.067 [248/710] Linking static target lib/librte_cryptodev.a 00:02:48.067 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.067 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:48.067 [251/710] Linking target lib/librte_efd.so.24.0 00:02:48.633 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.633 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:48.633 [254/710] Linking static target lib/librte_dispatcher.a 00:02:48.633 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:48.633 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:48.633 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:48.894 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:48.894 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:48.894 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:48.894 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:48.894 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:02:48.894 [263/710] Linking static target lib/librte_gpudev.a 00:02:48.894 [264/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:49.215 [265/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:49.215 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:49.215 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.215 [268/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:49.472 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.472 [270/710] Linking target lib/librte_cryptodev.so.24.0 00:02:49.472 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:49.472 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:49.472 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:49.730 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:49.730 [275/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:49.730 [276/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.730 [277/710] Linking static target lib/librte_eventdev.a 00:02:49.730 [278/710] Linking target lib/librte_gpudev.so.24.0 00:02:49.988 
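The recurring "Generating lib/<name>.sym_chk" steps are DPDK's check that each shared object exports only the symbols declared in its version map. Once a library has linked, its exported set can also be inspected directly with binutils; the path below is assumed from the build directory used in this run:

  $ nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.0 | head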
[279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:49.988 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:49.988 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:49.988 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:49.988 [283/710] Linking static target lib/librte_gro.a 00:02:49.988 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:50.246 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:50.246 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:50.246 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.246 [288/710] Linking target lib/librte_gro.so.24.0 00:02:50.246 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:50.503 [290/710] Linking static target lib/librte_gso.a 00:02:50.503 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:50.503 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.503 [293/710] Linking target lib/librte_gso.so.24.0 00:02:50.761 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:50.761 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:50.761 [296/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:50.761 [297/710] Linking static target lib/librte_jobstats.a 00:02:50.761 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:50.761 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:51.027 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:51.027 [301/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.027 [302/710] Linking static target lib/librte_ip_frag.a 00:02:51.027 [303/710] Linking static target lib/librte_latencystats.a 00:02:51.027 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.291 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:51.291 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.291 [307/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.291 [308/710] Linking target lib/librte_latencystats.so.24.0 00:02:51.291 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:51.291 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:51.291 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:51.549 [312/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:51.549 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:51.549 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.549 [315/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:51.549 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.549 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.806 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.062 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:52.062 [320/710] 
Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:52.062 [321/710] Linking static target lib/librte_lpm.a 00:02:52.062 [322/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:52.062 [323/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:52.062 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:52.062 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.319 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.319 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.319 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:52.319 [329/710] Linking static target lib/librte_pcapng.a 00:02:52.319 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.319 [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:52.319 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:52.319 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.576 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:52.576 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.576 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:52.833 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:52.833 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.833 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.091 [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:53.091 [341/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.091 [342/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.091 [343/710] Linking static target lib/librte_power.a 00:02:53.091 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.091 [345/710] Linking static target lib/librte_regexdev.a 00:02:53.091 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.091 [347/710] Linking static target lib/librte_rawdev.a 00:02:53.348 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.348 [349/710] Linking static target lib/librte_member.a 00:02:53.348 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:53.348 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:53.348 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:53.606 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.606 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:53.606 [355/710] Linking static target lib/librte_mldev.a 00:02:53.606 [356/710] Linking target lib/librte_member.so.24.0 00:02:53.606 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.606 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:53.606 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.606 [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.863 [361/710] Linking target 
lib/librte_power.so.24.0 00:02:53.863 [362/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.863 [363/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:53.863 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:54.122 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.122 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.122 [367/710] Linking static target lib/librte_reorder.a 00:02:54.122 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.122 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.122 [370/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.122 [371/710] Linking static target lib/librte_rib.a 00:02:54.380 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.380 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.380 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.380 [375/710] Linking static target lib/librte_stack.a 00:02:54.380 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.639 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:54.639 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.639 [379/710] Linking static target lib/librte_security.a 00:02:54.639 [380/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.639 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.639 [382/710] Linking target lib/librte_stack.so.24.0 00:02:54.639 [383/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:54.639 [384/710] Linking target lib/librte_rib.so.24.0 00:02:54.897 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.897 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:54.897 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:55.154 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.154 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.154 [390/710] Linking target lib/librte_security.so.24.0 00:02:55.154 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.154 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:55.154 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.412 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.412 [395/710] Linking static target lib/librte_sched.a 00:02:55.670 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.670 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.670 [398/710] Linking target lib/librte_sched.so.24.0 00:02:55.927 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.927 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:55.927 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.185 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:56.185 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
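Archives such as lib/member/libsketch_avx512_tmp.a above (and the fib dir24_8/trie AVX-512 objects further down) are ISA-specific variants that DPDK builds into temporary libraries and selects at run time. Whether the host CPU can actually take those AVX-512 paths can be checked with:

  $ grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u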
00:02:56.443 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.443 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:56.702 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:56.702 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:56.960 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:56.960 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.960 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:57.219 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:57.219 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:57.219 [413/710] Linking static target lib/librte_ipsec.a 00:02:57.477 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:57.477 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:57.477 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.477 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:57.477 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:57.477 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:57.477 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:57.736 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:57.736 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:57.736 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:58.671 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:58.671 [425/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:58.671 [426/710] Linking static target lib/librte_pdcp.a 00:02:58.671 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:58.671 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:58.671 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:58.671 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:58.671 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:58.671 [432/710] Linking static target lib/librte_fib.a 00:02:58.930 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.930 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:58.930 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.930 [436/710] Linking target lib/librte_fib.so.24.0 00:02:59.188 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:59.446 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:59.704 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:59.704 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:59.704 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:59.704 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:59.962 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:59.962 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:00.237 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:00.237 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.237 [447/710] Linking static target lib/librte_port.a 00:03:00.501 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:00.501 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:00.501 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:00.501 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:00.759 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.759 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:00.759 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:00.759 [455/710] Linking target lib/librte_port.so.24.0 00:03:00.759 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:00.759 [457/710] Linking static target lib/librte_pdump.a 00:03:01.016 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:01.016 [459/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.016 [460/710] Linking target lib/librte_pdump.so.24.0 00:03:01.016 [461/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.274 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:01.532 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:01.790 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:01.790 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:01.790 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:01.790 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:01.790 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:02.047 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:02.047 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:02.304 [471/710] Linking static target lib/librte_table.a 00:03:02.304 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:02.304 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:02.869 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:02.869 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.869 [476/710] Linking target lib/librte_table.so.24.0 00:03:02.869 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:03.127 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:03.127 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:03.384 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:03.384 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:03.642 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:03.642 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:03.900 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:03.900 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:03.900 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
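If it is ever unclear how an existing build directory was configured, meson can dump the full option set (not just the user-defined subset shown in the summary above):

  $ meson configure /home/vagrant/spdk_repo/dpdk/build-tmp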
00:03:04.467 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:04.467 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:04.467 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:04.467 [490/710] Linking static target lib/librte_graph.a 00:03:04.467 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:04.467 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:04.725 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:04.983 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.983 [495/710] Linking target lib/librte_graph.so.24.0 00:03:05.241 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:05.241 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.241 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:05.241 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:05.807 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:05.807 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:05.807 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:05.807 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:05.807 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:05.807 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.064 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:06.322 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:06.322 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:06.322 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.580 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.580 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.580 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.839 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:06.839 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.839 [515/710] Linking static target lib/librte_node.a 00:03:07.097 [516/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:07.097 [517/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:07.097 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:07.097 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:07.097 [520/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.356 [521/710] Linking target lib/librte_node.so.24.0 00:03:07.356 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:07.356 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.356 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:07.356 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:07.356 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.356 [527/710] Linking static target drivers/librte_bus_pci.a 00:03:07.613 [528/710] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.613 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.613 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.613 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:07.613 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:07.613 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:07.871 [534/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:07.871 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:07.871 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:07.871 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:07.871 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.871 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:08.129 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:08.129 [541/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:08.129 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.129 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.129 [544/710] Linking static target drivers/librte_mempool_ring.a 00:03:08.129 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:08.387 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:08.645 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:08.903 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:09.161 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:09.161 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:09.161 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:10.094 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:10.094 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:10.094 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:10.094 [555/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:10.094 [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:10.094 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:10.659 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:10.659 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:10.916 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:10.916 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:10.916 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:11.480 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:11.480 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 
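net/i40e is the only NIC PMD enabled in this build; before the DPDK applications being linked below can drive such a device, it has to be bound to a userspace-capable driver. A typical sequence with the in-tree helper, where the PCI address is a placeholder and not taken from this log:

  $ ./usertools/dpdk-devbind.py --status
  $ sudo modprobe vfio-pci
  $ sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:3b:00.0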
00:03:11.739 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:11.739 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:12.022 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:12.354 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:12.354 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:12.354 [570/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:12.354 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:12.354 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:12.354 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:12.919 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:12.919 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:12.919 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:12.919 [577/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:13.178 [578/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:13.178 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:13.178 [580/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:13.178 [581/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:13.435 [582/710] Linking static target lib/librte_vhost.a 00:03:13.435 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:13.435 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:13.692 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:13.692 [586/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:13.692 [587/710] Linking static target drivers/librte_net_i40e.a 00:03:13.692 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:13.692 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:13.692 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:13.692 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:13.692 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:14.256 [593/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.256 [594/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:14.256 [595/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:14.514 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:14.514 [597/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:14.514 [598/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.771 [599/710] Linking target lib/librte_vhost.so.24.0 00:03:14.771 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:15.029 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:15.029 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:15.029 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:15.287 [604/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:15.546 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:15.546 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:15.546 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:15.803 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:16.060 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:16.060 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:16.060 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:16.060 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:16.318 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:16.318 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:16.318 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:16.318 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:16.575 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:16.575 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:16.833 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:17.090 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:17.090 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:17.090 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:17.348 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:18.280 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:18.280 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:18.280 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:18.280 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:18.280 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:18.537 [629/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:18.537 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:18.537 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:18.537 [632/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:18.537 [633/710] Linking static target lib/librte_pipeline.a 00:03:18.794 [634/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:18.794 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:18.794 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:19.052 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:19.052 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:19.052 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:19.310 [640/710] Linking 
target app/dpdk-dumpcap 00:03:19.310 [641/710] Linking target app/dpdk-graph 00:03:19.310 [642/710] Linking target app/dpdk-pdump 00:03:19.568 [643/710] Linking target app/dpdk-proc-info 00:03:19.568 [644/710] Linking target app/dpdk-test-acl 00:03:19.568 [645/710] Linking target app/dpdk-test-cmdline 00:03:19.826 [646/710] Linking target app/dpdk-test-compress-perf 00:03:19.826 [647/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:19.826 [648/710] Linking target app/dpdk-test-crypto-perf 00:03:19.826 [649/710] Linking target app/dpdk-test-dma-perf 00:03:20.083 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:20.083 [651/710] Linking target app/dpdk-test-fib 00:03:20.083 [652/710] Linking target app/dpdk-test-gpudev 00:03:20.341 [653/710] Linking target app/dpdk-test-flow-perf 00:03:20.341 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:20.341 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:20.341 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:20.600 [657/710] Linking target app/dpdk-test-eventdev 00:03:20.600 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:20.600 [659/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:20.600 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:20.867 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:21.124 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:21.124 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:21.124 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:21.124 [665/710] Linking target app/dpdk-test-bbdev 00:03:21.124 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:21.124 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:21.690 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:21.690 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:21.690 [670/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.690 [671/710] Linking target lib/librte_pipeline.so.24.0 00:03:21.690 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:21.690 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:21.948 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:21.948 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:22.206 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:22.206 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:22.464 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:22.464 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:22.723 [680/710] Linking target app/dpdk-test-pipeline 00:03:22.723 [681/710] Linking target app/dpdk-test-mldev 00:03:22.723 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:22.723 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:23.288 [684/710] 
Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:23.289 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:23.289 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:23.289 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:23.546 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:23.804 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:23.804 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:24.071 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:24.071 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:24.071 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:24.655 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:24.655 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:24.655 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:25.221 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:25.221 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:25.221 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:25.480 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:25.480 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:25.480 [702/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:25.480 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:25.480 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:25.738 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:25.738 [706/710] Linking target app/dpdk-test-regex 00:03:25.995 [707/710] Linking target app/dpdk-test-sad 00:03:25.995 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.253 [709/710] Linking target app/dpdk-testpmd 00:03:26.512 [710/710] Linking target app/dpdk-test-security-perf 00:03:26.512 06:28:06 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:26.771 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:26.771 [0/1] Installing files. 
00:03:27.033 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.033 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.034 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.034 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.035 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.036 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.037 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.037 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.037 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.037 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
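Each library in the stream above is installed twice: as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.24.0); the unversioned names are layered on as symlinks near the end of this install step. A sketch of the resulting layout for one library, assuming the librte_eal entries above and the symlink entries recorded further down this log:

    # Illustrative layout; the paths come from the install entries in this log.
    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal*
    #   librte_eal.a                               static archive
    #   librte_eal.so      -> librte_eal.so.24     unversioned dev symlink
    #   librte_eal.so.24   -> librte_eal.so.24.0   ABI-major symlink
    #   librte_eal.so.24.0                         the shared object itself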
00:03:27.038 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
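Because this step also drops libdpdk.pc and libdpdk-libs.pc into build/lib/pkgconfig (see the pkgconfig entries later in this log), an application can be compiled against the installed tree through pkg-config rather than by hand-listing the libraries above. A minimal sketch, assuming a hypothetical hello.c; the PKG_CONFIG_PATH value is the install prefix recorded in the log:

    # Point pkg-config at the .pc files installed by this step.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    # hello.c is a placeholder source file, not an artifact of this build.
    cc hello.c -o hello $(pkg-config --cflags --libs libdpdk)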
00:03:27.038 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.038 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.296 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.296 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.296 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.296 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.296 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.296 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.297 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.297 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.297 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.297 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.559 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.560 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.560 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:27.560 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:27.560 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:27.560 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:27.560 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:27.560 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:27.560 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:27.560 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:27.560 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:27.560 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:27.560 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:27.560 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:27.560 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:27.560 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:27.560 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:27.560 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:27.560 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:27.560 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:27.560 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:27.560 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:27.560 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:27.560 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:27.560 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:27.560 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:27.560 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:27.560 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:27.560 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:27.560 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:27.560 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:27.560 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:27.560 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:27.560 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:27.560 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:27.560 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:27.560 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:27.560 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:27.560 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:27.560 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:27.560 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:27.560 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:27.560 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:27.560 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:27.560 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:27.560 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:27.560 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:27.560 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:27.560 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:27.560 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:27.560 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:27.560 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:27.560 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:27.560 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:27.560 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:27.560 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:27.560 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:27.560 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:27.560 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:27.560 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:27.560 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:27.560 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:27.560 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:27.560 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:27.560 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:27.560 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:27.560 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:27.560 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:27.560 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:27.560 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:27.560 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:27.560 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:27.561 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:27.561 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:27.561 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:27.561 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:27.561 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:27.561 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:27.561 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:27.561 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:27.561 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:27.561 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:27.561 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:27.561 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:27.561 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:27.561 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:27.561 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:27.561 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:27.561 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:27.561 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:27.561 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:27.561 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:27.561 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:27.561 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:27.561 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:27.561 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:27.561 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:27.561 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:27.561 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:27.561 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:27.561 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:27.561 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:27.561 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:27.561 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:27.561 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:27.561 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:27.561 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:27.561 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:27.561 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:27.561 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:27.561 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:27.561 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:27.561 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:27.561 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:27.561 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:27.561 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:27.561 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:27.561 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:27.561 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:27.561 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:27.561 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:27.561 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:27.561 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:27.561 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:27.561 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:27.561 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:27.561 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:27.561 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:27.561 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:27.561 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:27.561 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:27.561 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:27.561 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:27.561 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:27.561 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:27.561 06:28:07 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:27.561 06:28:07 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:27.561 06:28:07 -- common/autobuild_common.sh@200 -- $ cat 00:03:27.561 06:28:07 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:27.561 00:03:27.561 real 1m1.438s 00:03:27.561 user 7m38.298s 00:03:27.561 sys 1m3.823s 00:03:27.561 06:28:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:27.561 06:28:07 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.561 ************************************ 00:03:27.561 END TEST build_native_dpdk 00:03:27.561 ************************************ 00:03:27.561 06:28:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.561 06:28:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.561 06:28:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:27.561 06:28:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.561 06:28:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.561 06:28:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.561 06:28:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.561 06:28:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:27.819 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:27.819 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.819 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:27.819 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.385 Using 'verbs' RDMA provider 00:03:43.825 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:56.055 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:56.055 Creating mk/config.mk...done. 00:03:56.055 Creating mk/cc.flags.mk...done. 00:03:56.055 Type 'make' to build. 00:03:56.055 06:28:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:56.055 06:28:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:56.055 06:28:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:56.055 06:28:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.055 ************************************ 00:03:56.055 START TEST make 00:03:56.055 ************************************ 00:03:56.055 06:28:35 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:56.055 make[1]: Nothing to be done for 'all'. 
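[Annotation] The configure line above points SPDK at the DPDK tree installed earlier in this log rather than the bundled submodule; the "Using .../build/lib/pkgconfig for additional libs" line confirms it is consumed via pkg-config. A minimal sketch of that flow, assuming the same prefix /home/vagrant/spdk_repo/dpdk/build and only a subset of the flags used in this run:

  cd /home/vagrant/spdk_repo/spdk
  # --with-dpdk consumes the pkg-config files installed under <prefix>/lib/pkgconfig
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared --enable-debug
  make -j10   # same parallelism as the run_test make invocation above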
00:04:22.589 CC lib/log/log.o 00:04:22.589 CC lib/log/log_flags.o 00:04:22.589 CC lib/log/log_deprecated.o 00:04:22.589 CC lib/ut/ut.o 00:04:22.589 CC lib/ut_mock/mock.o 00:04:22.589 LIB libspdk_ut_mock.a 00:04:22.589 LIB libspdk_ut.a 00:04:22.589 LIB libspdk_log.a 00:04:22.589 SO libspdk_ut_mock.so.5.0 00:04:22.589 SO libspdk_ut.so.1.0 00:04:22.589 SO libspdk_log.so.6.1 00:04:22.589 SYMLINK libspdk_ut_mock.so 00:04:22.589 SYMLINK libspdk_ut.so 00:04:22.589 SYMLINK libspdk_log.so 00:04:22.589 CC lib/util/base64.o 00:04:22.589 CC lib/util/bit_array.o 00:04:22.589 CC lib/util/cpuset.o 00:04:22.589 CC lib/util/crc16.o 00:04:22.589 CC lib/util/crc32.o 00:04:22.589 CC lib/ioat/ioat.o 00:04:22.589 CC lib/util/crc32c.o 00:04:22.589 CC lib/dma/dma.o 00:04:22.589 CXX lib/trace_parser/trace.o 00:04:22.589 CC lib/vfio_user/host/vfio_user_pci.o 00:04:22.589 CC lib/vfio_user/host/vfio_user.o 00:04:22.589 CC lib/util/crc32_ieee.o 00:04:22.589 CC lib/util/crc64.o 00:04:22.589 CC lib/util/dif.o 00:04:22.589 LIB libspdk_dma.a 00:04:22.589 CC lib/util/fd.o 00:04:22.589 SO libspdk_dma.so.3.0 00:04:22.589 CC lib/util/file.o 00:04:22.589 SYMLINK libspdk_dma.so 00:04:22.589 CC lib/util/hexlify.o 00:04:22.589 CC lib/util/iov.o 00:04:22.589 LIB libspdk_ioat.a 00:04:22.589 CC lib/util/math.o 00:04:22.589 SO libspdk_ioat.so.6.0 00:04:22.589 CC lib/util/pipe.o 00:04:22.589 LIB libspdk_vfio_user.a 00:04:22.589 CC lib/util/strerror_tls.o 00:04:22.589 CC lib/util/string.o 00:04:22.589 SO libspdk_vfio_user.so.4.0 00:04:22.589 SYMLINK libspdk_ioat.so 00:04:22.589 CC lib/util/uuid.o 00:04:22.589 CC lib/util/fd_group.o 00:04:22.589 CC lib/util/xor.o 00:04:22.589 SYMLINK libspdk_vfio_user.so 00:04:22.589 CC lib/util/zipf.o 00:04:22.589 LIB libspdk_util.a 00:04:22.589 SO libspdk_util.so.8.0 00:04:22.589 SYMLINK libspdk_util.so 00:04:22.589 LIB libspdk_trace_parser.a 00:04:22.589 SO libspdk_trace_parser.so.4.0 00:04:22.589 CC lib/conf/conf.o 00:04:22.589 CC lib/idxd/idxd.o 00:04:22.589 CC lib/idxd/idxd_user.o 00:04:22.590 CC lib/json/json_util.o 00:04:22.590 CC lib/json/json_parse.o 00:04:22.590 CC lib/rdma/common.o 00:04:22.590 CC lib/idxd/idxd_kernel.o 00:04:22.590 CC lib/vmd/vmd.o 00:04:22.590 CC lib/env_dpdk/env.o 00:04:22.590 SYMLINK libspdk_trace_parser.so 00:04:22.590 CC lib/env_dpdk/memory.o 00:04:22.590 CC lib/env_dpdk/pci.o 00:04:22.590 LIB libspdk_conf.a 00:04:22.590 CC lib/env_dpdk/init.o 00:04:22.590 CC lib/vmd/led.o 00:04:22.590 SO libspdk_conf.so.5.0 00:04:22.590 CC lib/json/json_write.o 00:04:22.590 CC lib/rdma/rdma_verbs.o 00:04:22.590 SYMLINK libspdk_conf.so 00:04:22.590 CC lib/env_dpdk/threads.o 00:04:22.590 CC lib/env_dpdk/pci_ioat.o 00:04:22.590 CC lib/env_dpdk/pci_virtio.o 00:04:22.590 LIB libspdk_rdma.a 00:04:22.590 CC lib/env_dpdk/pci_vmd.o 00:04:22.590 SO libspdk_rdma.so.5.0 00:04:22.590 LIB libspdk_idxd.a 00:04:22.590 CC lib/env_dpdk/pci_idxd.o 00:04:22.590 SO libspdk_idxd.so.11.0 00:04:22.590 SYMLINK libspdk_rdma.so 00:04:22.590 LIB libspdk_json.a 00:04:22.590 CC lib/env_dpdk/pci_event.o 00:04:22.590 CC lib/env_dpdk/sigbus_handler.o 00:04:22.590 CC lib/env_dpdk/pci_dpdk.o 00:04:22.590 SYMLINK libspdk_idxd.so 00:04:22.590 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:22.590 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:22.590 SO libspdk_json.so.5.1 00:04:22.590 LIB libspdk_vmd.a 00:04:22.590 SYMLINK libspdk_json.so 00:04:22.590 SO libspdk_vmd.so.5.0 00:04:22.590 SYMLINK libspdk_vmd.so 00:04:22.590 CC lib/jsonrpc/jsonrpc_server.o 00:04:22.590 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:22.590 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:22.590 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:22.590 LIB libspdk_jsonrpc.a 00:04:22.590 SO libspdk_jsonrpc.so.5.1 00:04:22.590 SYMLINK libspdk_jsonrpc.so 00:04:22.590 LIB libspdk_env_dpdk.a 00:04:22.590 CC lib/rpc/rpc.o 00:04:22.590 SO libspdk_env_dpdk.so.13.0 00:04:22.590 LIB libspdk_rpc.a 00:04:22.590 SO libspdk_rpc.so.5.0 00:04:22.590 SYMLINK libspdk_env_dpdk.so 00:04:22.590 SYMLINK libspdk_rpc.so 00:04:22.590 CC lib/notify/notify.o 00:04:22.590 CC lib/notify/notify_rpc.o 00:04:22.590 CC lib/sock/sock.o 00:04:22.590 CC lib/sock/sock_rpc.o 00:04:22.590 CC lib/trace/trace.o 00:04:22.590 CC lib/trace/trace_flags.o 00:04:22.590 CC lib/trace/trace_rpc.o 00:04:22.590 LIB libspdk_notify.a 00:04:22.590 SO libspdk_notify.so.5.0 00:04:22.590 LIB libspdk_trace.a 00:04:22.590 SO libspdk_trace.so.9.0 00:04:22.590 SYMLINK libspdk_notify.so 00:04:22.590 LIB libspdk_sock.a 00:04:22.590 SYMLINK libspdk_trace.so 00:04:22.590 SO libspdk_sock.so.8.0 00:04:22.590 SYMLINK libspdk_sock.so 00:04:22.590 CC lib/thread/thread.o 00:04:22.590 CC lib/thread/iobuf.o 00:04:22.590 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.590 CC lib/nvme/nvme_fabric.o 00:04:22.590 CC lib/nvme/nvme_ctrlr.o 00:04:22.590 CC lib/nvme/nvme_ns.o 00:04:22.590 CC lib/nvme/nvme_pcie.o 00:04:22.590 CC lib/nvme/nvme_ns_cmd.o 00:04:22.590 CC lib/nvme/nvme_pcie_common.o 00:04:22.590 CC lib/nvme/nvme_qpair.o 00:04:22.849 CC lib/nvme/nvme.o 00:04:23.108 CC lib/nvme/nvme_quirks.o 00:04:23.108 CC lib/nvme/nvme_transport.o 00:04:23.366 CC lib/nvme/nvme_discovery.o 00:04:23.366 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.366 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.366 CC lib/nvme/nvme_tcp.o 00:04:23.624 CC lib/nvme/nvme_opal.o 00:04:23.624 CC lib/nvme/nvme_io_msg.o 00:04:23.882 CC lib/nvme/nvme_poll_group.o 00:04:23.882 CC lib/nvme/nvme_zns.o 00:04:23.882 CC lib/nvme/nvme_cuse.o 00:04:23.882 LIB libspdk_thread.a 00:04:23.882 SO libspdk_thread.so.9.0 00:04:24.141 CC lib/nvme/nvme_vfio_user.o 00:04:24.141 SYMLINK libspdk_thread.so 00:04:24.141 CC lib/nvme/nvme_rdma.o 00:04:24.398 CC lib/accel/accel.o 00:04:24.398 CC lib/blob/blobstore.o 00:04:24.398 CC lib/init/json_config.o 00:04:24.656 CC lib/init/subsystem.o 00:04:24.656 CC lib/virtio/virtio.o 00:04:24.656 CC lib/virtio/virtio_vhost_user.o 00:04:24.657 CC lib/virtio/virtio_vfio_user.o 00:04:24.657 CC lib/init/subsystem_rpc.o 00:04:24.657 CC lib/init/rpc.o 00:04:24.915 CC lib/virtio/virtio_pci.o 00:04:24.915 CC lib/blob/request.o 00:04:24.915 CC lib/accel/accel_rpc.o 00:04:24.915 CC lib/blob/zeroes.o 00:04:24.915 CC lib/blob/blob_bs_dev.o 00:04:24.915 LIB libspdk_init.a 00:04:24.915 SO libspdk_init.so.4.0 00:04:24.915 CC lib/accel/accel_sw.o 00:04:25.173 SYMLINK libspdk_init.so 00:04:25.173 LIB libspdk_virtio.a 00:04:25.173 SO libspdk_virtio.so.6.0 00:04:25.173 CC lib/event/app.o 00:04:25.173 CC lib/event/reactor.o 00:04:25.173 CC lib/event/log_rpc.o 00:04:25.173 CC lib/event/app_rpc.o 00:04:25.173 CC lib/event/scheduler_static.o 00:04:25.173 SYMLINK libspdk_virtio.so 00:04:25.432 LIB libspdk_accel.a 00:04:25.432 LIB libspdk_nvme.a 00:04:25.432 SO libspdk_accel.so.14.0 00:04:25.690 SYMLINK libspdk_accel.so 00:04:25.690 LIB libspdk_event.a 00:04:25.690 SO libspdk_nvme.so.12.0 00:04:25.690 SO libspdk_event.so.12.0 00:04:25.690 CC lib/bdev/bdev.o 00:04:25.690 CC lib/bdev/bdev_rpc.o 00:04:25.690 CC lib/bdev/scsi_nvme.o 00:04:25.690 CC lib/bdev/bdev_zone.o 00:04:25.690 CC lib/bdev/part.o 00:04:25.690 SYMLINK libspdk_event.so 00:04:25.948 SYMLINK libspdk_nvme.so 00:04:27.325 
LIB libspdk_blob.a 00:04:27.325 SO libspdk_blob.so.10.1 00:04:27.325 SYMLINK libspdk_blob.so 00:04:27.585 CC lib/blobfs/blobfs.o 00:04:27.585 CC lib/blobfs/tree.o 00:04:27.585 CC lib/lvol/lvol.o 00:04:28.569 LIB libspdk_blobfs.a 00:04:28.569 SO libspdk_blobfs.so.9.0 00:04:28.569 LIB libspdk_bdev.a 00:04:28.569 LIB libspdk_lvol.a 00:04:28.569 SYMLINK libspdk_blobfs.so 00:04:28.569 SO libspdk_bdev.so.14.0 00:04:28.569 SO libspdk_lvol.so.9.1 00:04:28.569 SYMLINK libspdk_lvol.so 00:04:28.569 SYMLINK libspdk_bdev.so 00:04:28.827 CC lib/ublk/ublk.o 00:04:28.827 CC lib/ublk/ublk_rpc.o 00:04:28.827 CC lib/scsi/dev.o 00:04:28.827 CC lib/scsi/lun.o 00:04:28.827 CC lib/scsi/port.o 00:04:28.827 CC lib/scsi/scsi.o 00:04:28.827 CC lib/nbd/nbd.o 00:04:28.828 CC lib/nvmf/ctrlr.o 00:04:28.828 CC lib/scsi/scsi_bdev.o 00:04:28.828 CC lib/ftl/ftl_core.o 00:04:28.828 CC lib/scsi/scsi_pr.o 00:04:28.828 CC lib/nbd/nbd_rpc.o 00:04:28.828 CC lib/ftl/ftl_init.o 00:04:29.085 CC lib/nvmf/ctrlr_discovery.o 00:04:29.085 CC lib/nvmf/ctrlr_bdev.o 00:04:29.085 CC lib/nvmf/subsystem.o 00:04:29.085 CC lib/ftl/ftl_layout.o 00:04:29.085 CC lib/ftl/ftl_debug.o 00:04:29.085 LIB libspdk_nbd.a 00:04:29.343 CC lib/scsi/scsi_rpc.o 00:04:29.343 SO libspdk_nbd.so.6.0 00:04:29.343 CC lib/scsi/task.o 00:04:29.343 SYMLINK libspdk_nbd.so 00:04:29.343 CC lib/ftl/ftl_io.o 00:04:29.343 LIB libspdk_ublk.a 00:04:29.343 SO libspdk_ublk.so.2.0 00:04:29.343 CC lib/ftl/ftl_sb.o 00:04:29.343 CC lib/ftl/ftl_l2p.o 00:04:29.343 SYMLINK libspdk_ublk.so 00:04:29.343 CC lib/ftl/ftl_l2p_flat.o 00:04:29.343 CC lib/ftl/ftl_nv_cache.o 00:04:29.601 LIB libspdk_scsi.a 00:04:29.601 CC lib/nvmf/nvmf.o 00:04:29.601 CC lib/ftl/ftl_band.o 00:04:29.601 SO libspdk_scsi.so.8.0 00:04:29.601 CC lib/ftl/ftl_band_ops.o 00:04:29.601 CC lib/ftl/ftl_writer.o 00:04:29.601 CC lib/nvmf/nvmf_rpc.o 00:04:29.601 SYMLINK libspdk_scsi.so 00:04:29.601 CC lib/nvmf/transport.o 00:04:29.859 CC lib/nvmf/tcp.o 00:04:29.859 CC lib/nvmf/rdma.o 00:04:29.859 CC lib/ftl/ftl_rq.o 00:04:30.117 CC lib/iscsi/conn.o 00:04:30.117 CC lib/iscsi/init_grp.o 00:04:30.117 CC lib/iscsi/iscsi.o 00:04:30.374 CC lib/iscsi/md5.o 00:04:30.374 CC lib/ftl/ftl_reloc.o 00:04:30.374 CC lib/ftl/ftl_l2p_cache.o 00:04:30.374 CC lib/iscsi/param.o 00:04:30.374 CC lib/iscsi/portal_grp.o 00:04:30.630 CC lib/iscsi/tgt_node.o 00:04:30.630 CC lib/vhost/vhost.o 00:04:30.630 CC lib/iscsi/iscsi_subsystem.o 00:04:30.630 CC lib/iscsi/iscsi_rpc.o 00:04:30.630 CC lib/iscsi/task.o 00:04:30.630 CC lib/ftl/ftl_p2l.o 00:04:30.887 CC lib/vhost/vhost_rpc.o 00:04:30.887 CC lib/vhost/vhost_scsi.o 00:04:30.887 CC lib/ftl/mngt/ftl_mngt.o 00:04:31.143 CC lib/vhost/vhost_blk.o 00:04:31.144 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:31.144 CC lib/vhost/rte_vhost_user.o 00:04:31.399 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:31.399 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:31.399 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:31.399 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:31.399 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:31.399 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:31.657 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:31.657 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:31.657 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:31.657 LIB libspdk_iscsi.a 00:04:31.657 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:31.657 SO libspdk_iscsi.so.7.0 00:04:31.657 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:31.915 CC lib/ftl/utils/ftl_conf.o 00:04:31.915 CC lib/ftl/utils/ftl_md.o 00:04:31.915 SYMLINK libspdk_iscsi.so 00:04:31.915 CC lib/ftl/utils/ftl_mempool.o 00:04:31.915 CC lib/ftl/utils/ftl_bitmap.o 
00:04:31.915 CC lib/ftl/utils/ftl_property.o 00:04:31.915 LIB libspdk_nvmf.a 00:04:31.915 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:31.915 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:31.915 SO libspdk_nvmf.so.17.0 00:04:32.173 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:32.173 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:32.173 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:32.173 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:32.173 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:32.173 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:32.173 LIB libspdk_vhost.a 00:04:32.173 SYMLINK libspdk_nvmf.so 00:04:32.173 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:32.173 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:32.173 SO libspdk_vhost.so.7.1 00:04:32.173 CC lib/ftl/base/ftl_base_dev.o 00:04:32.173 CC lib/ftl/base/ftl_base_bdev.o 00:04:32.173 CC lib/ftl/ftl_trace.o 00:04:32.430 SYMLINK libspdk_vhost.so 00:04:32.430 LIB libspdk_ftl.a 00:04:32.688 SO libspdk_ftl.so.8.0 00:04:32.946 SYMLINK libspdk_ftl.so 00:04:33.203 CC module/env_dpdk/env_dpdk_rpc.o 00:04:33.460 CC module/accel/dsa/accel_dsa.o 00:04:33.460 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:33.460 CC module/sock/uring/uring.o 00:04:33.460 CC module/blob/bdev/blob_bdev.o 00:04:33.460 CC module/accel/ioat/accel_ioat.o 00:04:33.460 CC module/accel/error/accel_error.o 00:04:33.460 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.460 CC module/accel/iaa/accel_iaa.o 00:04:33.460 CC module/sock/posix/posix.o 00:04:33.460 LIB libspdk_env_dpdk_rpc.a 00:04:33.460 SO libspdk_env_dpdk_rpc.so.5.0 00:04:33.460 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.460 SYMLINK libspdk_env_dpdk_rpc.so 00:04:33.460 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.460 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:33.460 CC module/accel/error/accel_error_rpc.o 00:04:33.460 LIB libspdk_scheduler_dynamic.a 00:04:33.460 CC module/accel/iaa/accel_iaa_rpc.o 00:04:33.460 SO libspdk_scheduler_dynamic.so.3.0 00:04:33.461 CC module/accel/dsa/accel_dsa_rpc.o 00:04:33.718 LIB libspdk_blob_bdev.a 00:04:33.718 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.718 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.718 SO libspdk_blob_bdev.so.10.1 00:04:33.718 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.718 LIB libspdk_accel_ioat.a 00:04:33.718 SYMLINK libspdk_blob_bdev.so 00:04:33.718 LIB libspdk_accel_error.a 00:04:33.718 SO libspdk_accel_ioat.so.5.0 00:04:33.718 LIB libspdk_accel_iaa.a 00:04:33.718 SO libspdk_accel_error.so.1.0 00:04:33.718 LIB libspdk_accel_dsa.a 00:04:33.718 SO libspdk_accel_iaa.so.2.0 00:04:33.718 SO libspdk_accel_dsa.so.4.0 00:04:33.718 SYMLINK libspdk_accel_ioat.so 00:04:33.718 SYMLINK libspdk_accel_error.so 00:04:33.718 SYMLINK libspdk_accel_iaa.so 00:04:33.718 LIB libspdk_scheduler_gscheduler.a 00:04:33.718 SYMLINK libspdk_accel_dsa.so 00:04:34.090 SO libspdk_scheduler_gscheduler.so.3.0 00:04:34.090 CC module/blobfs/bdev/blobfs_bdev.o 00:04:34.090 CC module/bdev/error/vbdev_error.o 00:04:34.090 CC module/bdev/gpt/gpt.o 00:04:34.090 CC module/bdev/delay/vbdev_delay.o 00:04:34.090 CC module/bdev/lvol/vbdev_lvol.o 00:04:34.090 SYMLINK libspdk_scheduler_gscheduler.so 00:04:34.090 CC module/bdev/gpt/vbdev_gpt.o 00:04:34.090 CC module/bdev/malloc/bdev_malloc.o 00:04:34.090 CC module/bdev/null/bdev_null.o 00:04:34.090 LIB libspdk_sock_uring.a 00:04:34.090 SO libspdk_sock_uring.so.4.0 00:04:34.090 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:34.090 LIB libspdk_sock_posix.a 00:04:34.090 CC module/bdev/null/bdev_null_rpc.o 00:04:34.090 SYMLINK libspdk_sock_uring.so 
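[Annotation] Each SPDK library in the make output follows the same cadence: CC compiles the objects, LIB archives the static library, SO links the versioned shared object, and SYMLINK adds the unversioned development link. Roughly equivalent shell for libspdk_log (a sketch of what those steps amount to, with an assumed soname; not SPDK's exact Makefile recipe):

  cc -fPIC -c lib/log/log.c -o log.o           # CC lib/log/log.o
  ar rcs libspdk_log.a log.o                   # LIB libspdk_log.a
  cc -shared -Wl,-soname,libspdk_log.so.6 \
     log.o -o libspdk_log.so.6.1               # SO libspdk_log.so.6.1
  ln -sf libspdk_log.so.6.1 libspdk_log.so     # SYMLINK libspdk_log.so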
00:04:34.090 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:34.090 SO libspdk_sock_posix.so.5.0 00:04:34.090 CC module/bdev/error/vbdev_error_rpc.o 00:04:34.355 LIB libspdk_bdev_gpt.a 00:04:34.355 SYMLINK libspdk_sock_posix.so 00:04:34.355 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:34.355 SO libspdk_bdev_gpt.so.5.0 00:04:34.355 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:34.355 LIB libspdk_blobfs_bdev.a 00:04:34.355 LIB libspdk_bdev_null.a 00:04:34.355 SYMLINK libspdk_bdev_gpt.so 00:04:34.355 LIB libspdk_bdev_malloc.a 00:04:34.355 SO libspdk_blobfs_bdev.so.5.0 00:04:34.355 SO libspdk_bdev_null.so.5.0 00:04:34.355 LIB libspdk_bdev_error.a 00:04:34.355 SO libspdk_bdev_malloc.so.5.0 00:04:34.355 CC module/bdev/nvme/bdev_nvme.o 00:04:34.355 SO libspdk_bdev_error.so.5.0 00:04:34.355 SYMLINK libspdk_blobfs_bdev.so 00:04:34.355 LIB libspdk_bdev_delay.a 00:04:34.355 SYMLINK libspdk_bdev_null.so 00:04:34.355 CC module/bdev/passthru/vbdev_passthru.o 00:04:34.355 SYMLINK libspdk_bdev_malloc.so 00:04:34.355 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:34.355 CC module/bdev/raid/bdev_raid.o 00:04:34.355 SYMLINK libspdk_bdev_error.so 00:04:34.355 SO libspdk_bdev_delay.so.5.0 00:04:34.613 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.613 CC module/bdev/split/vbdev_split.o 00:04:34.613 SYMLINK libspdk_bdev_delay.so 00:04:34.613 CC module/bdev/uring/bdev_uring.o 00:04:34.613 LIB libspdk_bdev_lvol.a 00:04:34.613 CC module/bdev/aio/bdev_aio.o 00:04:34.613 SO libspdk_bdev_lvol.so.5.0 00:04:34.613 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:34.613 CC module/bdev/ftl/bdev_ftl.o 00:04:34.613 SYMLINK libspdk_bdev_lvol.so 00:04:34.871 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.871 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.871 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.871 LIB libspdk_bdev_passthru.a 00:04:34.871 CC module/bdev/uring/bdev_uring_rpc.o 00:04:34.871 SO libspdk_bdev_passthru.so.5.0 00:04:34.871 LIB libspdk_bdev_split.a 00:04:34.871 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.871 SO libspdk_bdev_split.so.5.0 00:04:35.128 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:35.128 SYMLINK libspdk_bdev_passthru.so 00:04:35.128 CC module/bdev/raid/bdev_raid_rpc.o 00:04:35.128 LIB libspdk_bdev_zone_block.a 00:04:35.128 SYMLINK libspdk_bdev_split.so 00:04:35.128 CC module/bdev/raid/bdev_raid_sb.o 00:04:35.128 SO libspdk_bdev_zone_block.so.5.0 00:04:35.128 LIB libspdk_bdev_uring.a 00:04:35.128 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:35.128 LIB libspdk_bdev_aio.a 00:04:35.128 SYMLINK libspdk_bdev_zone_block.so 00:04:35.128 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:35.128 SO libspdk_bdev_uring.so.5.0 00:04:35.128 SO libspdk_bdev_aio.so.5.0 00:04:35.128 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:35.128 LIB libspdk_bdev_ftl.a 00:04:35.128 SYMLINK libspdk_bdev_uring.so 00:04:35.128 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:35.128 SYMLINK libspdk_bdev_aio.so 00:04:35.128 CC module/bdev/raid/raid0.o 00:04:35.385 SO libspdk_bdev_ftl.so.5.0 00:04:35.385 CC module/bdev/raid/raid1.o 00:04:35.385 CC module/bdev/raid/concat.o 00:04:35.385 SYMLINK libspdk_bdev_ftl.so 00:04:35.385 CC module/bdev/nvme/nvme_rpc.o 00:04:35.385 CC module/bdev/nvme/bdev_mdns_client.o 00:04:35.385 LIB libspdk_bdev_iscsi.a 00:04:35.385 SO libspdk_bdev_iscsi.so.5.0 00:04:35.385 CC module/bdev/nvme/vbdev_opal.o 00:04:35.385 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:35.385 SYMLINK libspdk_bdev_iscsi.so 00:04:35.385 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:35.643 LIB libspdk_bdev_raid.a 00:04:35.643 SO 
libspdk_bdev_raid.so.5.0 00:04:35.643 LIB libspdk_bdev_virtio.a 00:04:35.643 SO libspdk_bdev_virtio.so.5.0 00:04:35.643 SYMLINK libspdk_bdev_raid.so 00:04:35.643 SYMLINK libspdk_bdev_virtio.so 00:04:36.577 LIB libspdk_bdev_nvme.a 00:04:36.577 SO libspdk_bdev_nvme.so.6.0 00:04:36.835 SYMLINK libspdk_bdev_nvme.so 00:04:37.091 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.091 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.091 CC module/event/subsystems/vmd/vmd.o 00:04:37.091 CC module/event/subsystems/scheduler/scheduler.o 00:04:37.091 CC module/event/subsystems/iobuf/iobuf.o 00:04:37.091 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:37.091 CC module/event/subsystems/sock/sock.o 00:04:37.091 LIB libspdk_event_vhost_blk.a 00:04:37.091 LIB libspdk_event_scheduler.a 00:04:37.348 LIB libspdk_event_sock.a 00:04:37.348 LIB libspdk_event_vmd.a 00:04:37.348 SO libspdk_event_vhost_blk.so.2.0 00:04:37.348 SO libspdk_event_scheduler.so.3.0 00:04:37.348 SO libspdk_event_sock.so.4.0 00:04:37.348 LIB libspdk_event_iobuf.a 00:04:37.348 SO libspdk_event_vmd.so.5.0 00:04:37.348 SO libspdk_event_iobuf.so.2.0 00:04:37.348 SYMLINK libspdk_event_vhost_blk.so 00:04:37.348 SYMLINK libspdk_event_scheduler.so 00:04:37.348 SYMLINK libspdk_event_sock.so 00:04:37.348 SYMLINK libspdk_event_vmd.so 00:04:37.348 SYMLINK libspdk_event_iobuf.so 00:04:37.605 CC module/event/subsystems/accel/accel.o 00:04:37.605 LIB libspdk_event_accel.a 00:04:37.863 SO libspdk_event_accel.so.5.0 00:04:37.863 SYMLINK libspdk_event_accel.so 00:04:38.121 CC module/event/subsystems/bdev/bdev.o 00:04:38.121 LIB libspdk_event_bdev.a 00:04:38.378 SO libspdk_event_bdev.so.5.0 00:04:38.378 SYMLINK libspdk_event_bdev.so 00:04:38.378 CC module/event/subsystems/scsi/scsi.o 00:04:38.378 CC module/event/subsystems/ublk/ublk.o 00:04:38.378 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:38.378 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:38.378 CC module/event/subsystems/nbd/nbd.o 00:04:38.635 LIB libspdk_event_ublk.a 00:04:38.635 LIB libspdk_event_nbd.a 00:04:38.635 LIB libspdk_event_scsi.a 00:04:38.635 SO libspdk_event_ublk.so.2.0 00:04:38.635 SO libspdk_event_nbd.so.5.0 00:04:38.635 SO libspdk_event_scsi.so.5.0 00:04:38.635 SYMLINK libspdk_event_ublk.so 00:04:38.892 LIB libspdk_event_nvmf.a 00:04:38.892 SYMLINK libspdk_event_nbd.so 00:04:38.892 SYMLINK libspdk_event_scsi.so 00:04:38.892 SO libspdk_event_nvmf.so.5.0 00:04:38.893 SYMLINK libspdk_event_nvmf.so 00:04:38.893 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:38.893 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.151 LIB libspdk_event_vhost_scsi.a 00:04:39.151 LIB libspdk_event_iscsi.a 00:04:39.151 SO libspdk_event_vhost_scsi.so.2.0 00:04:39.151 SO libspdk_event_iscsi.so.5.0 00:04:39.151 SYMLINK libspdk_event_vhost_scsi.so 00:04:39.151 SYMLINK libspdk_event_iscsi.so 00:04:39.410 SO libspdk.so.5.0 00:04:39.410 SYMLINK libspdk.so 00:04:39.410 CC app/trace_record/trace_record.o 00:04:39.410 CC app/spdk_nvme_identify/identify.o 00:04:39.410 CC app/spdk_nvme_perf/perf.o 00:04:39.410 CC app/spdk_lspci/spdk_lspci.o 00:04:39.410 CXX app/trace/trace.o 00:04:39.668 CC app/iscsi_tgt/iscsi_tgt.o 00:04:39.668 CC app/spdk_tgt/spdk_tgt.o 00:04:39.668 CC examples/accel/perf/accel_perf.o 00:04:39.668 CC app/nvmf_tgt/nvmf_main.o 00:04:39.668 CC test/accel/dif/dif.o 00:04:39.668 LINK spdk_lspci 00:04:39.668 LINK iscsi_tgt 00:04:39.926 LINK spdk_tgt 00:04:39.926 LINK spdk_trace_record 00:04:39.926 LINK nvmf_tgt 00:04:39.926 CC app/spdk_nvme_discover/discovery_aer.o 00:04:39.926 
LINK spdk_trace 00:04:40.185 CC app/spdk_top/spdk_top.o 00:04:40.185 LINK dif 00:04:40.185 LINK accel_perf 00:04:40.185 CC app/vhost/vhost.o 00:04:40.185 LINK spdk_nvme_discover 00:04:40.185 CC app/spdk_dd/spdk_dd.o 00:04:40.185 CC examples/bdev/hello_world/hello_bdev.o 00:04:40.442 LINK vhost 00:04:40.442 LINK spdk_nvme_perf 00:04:40.442 LINK spdk_nvme_identify 00:04:40.442 CC test/app/bdev_svc/bdev_svc.o 00:04:40.442 CC examples/ioat/perf/perf.o 00:04:40.442 CC examples/blob/hello_world/hello_blob.o 00:04:40.442 LINK hello_bdev 00:04:40.699 CC examples/nvme/hello_world/hello_world.o 00:04:40.699 LINK bdev_svc 00:04:40.699 LINK spdk_dd 00:04:40.699 CC examples/sock/hello_world/hello_sock.o 00:04:40.699 CC examples/vmd/lsvmd/lsvmd.o 00:04:40.699 LINK ioat_perf 00:04:40.699 LINK hello_blob 00:04:40.699 CC examples/nvmf/nvmf/nvmf.o 00:04:40.957 LINK hello_world 00:04:40.957 LINK lsvmd 00:04:40.957 LINK spdk_top 00:04:40.957 CC examples/bdev/bdevperf/bdevperf.o 00:04:40.957 LINK hello_sock 00:04:40.957 CC examples/ioat/verify/verify.o 00:04:40.957 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.215 CC examples/nvme/reconnect/reconnect.o 00:04:41.215 CC test/bdev/bdevio/bdevio.o 00:04:41.215 CC examples/blob/cli/blobcli.o 00:04:41.215 LINK nvmf 00:04:41.215 CC examples/vmd/led/led.o 00:04:41.215 CC app/fio/nvme/fio_plugin.o 00:04:41.215 LINK verify 00:04:41.215 CC examples/util/zipf/zipf.o 00:04:41.472 LINK led 00:04:41.472 LINK reconnect 00:04:41.472 LINK zipf 00:04:41.472 LINK nvme_fuzz 00:04:41.472 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.472 CC examples/thread/thread/thread_ex.o 00:04:41.472 LINK bdevio 00:04:41.730 LINK blobcli 00:04:41.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:41.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:41.730 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:41.730 TEST_HEADER include/spdk/accel.h 00:04:41.730 LINK bdevperf 00:04:41.730 TEST_HEADER include/spdk/accel_module.h 00:04:41.730 CC test/blobfs/mkfs/mkfs.o 00:04:41.730 TEST_HEADER include/spdk/assert.h 00:04:41.730 TEST_HEADER include/spdk/barrier.h 00:04:41.730 TEST_HEADER include/spdk/base64.h 00:04:41.730 TEST_HEADER include/spdk/bdev.h 00:04:41.730 TEST_HEADER include/spdk/bdev_module.h 00:04:41.730 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.730 LINK spdk_nvme 00:04:41.730 TEST_HEADER include/spdk/bit_array.h 00:04:41.730 LINK thread 00:04:41.730 TEST_HEADER include/spdk/bit_pool.h 00:04:41.730 TEST_HEADER include/spdk/blob_bdev.h 00:04:41.730 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.730 TEST_HEADER include/spdk/blobfs.h 00:04:41.730 TEST_HEADER include/spdk/blob.h 00:04:41.730 TEST_HEADER include/spdk/conf.h 00:04:41.730 TEST_HEADER include/spdk/config.h 00:04:41.730 TEST_HEADER include/spdk/cpuset.h 00:04:41.730 TEST_HEADER include/spdk/crc16.h 00:04:41.730 TEST_HEADER include/spdk/crc32.h 00:04:41.730 TEST_HEADER include/spdk/crc64.h 00:04:41.730 TEST_HEADER include/spdk/dif.h 00:04:41.730 TEST_HEADER include/spdk/dma.h 00:04:41.730 TEST_HEADER include/spdk/endian.h 00:04:41.730 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.988 TEST_HEADER include/spdk/env.h 00:04:41.988 TEST_HEADER include/spdk/event.h 00:04:41.988 TEST_HEADER include/spdk/fd_group.h 00:04:41.988 TEST_HEADER include/spdk/fd.h 00:04:41.988 TEST_HEADER include/spdk/file.h 00:04:41.988 TEST_HEADER include/spdk/ftl.h 00:04:41.988 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.988 TEST_HEADER include/spdk/hexlify.h 00:04:41.988 TEST_HEADER include/spdk/histogram_data.h 00:04:41.988 TEST_HEADER 
include/spdk/idxd.h 00:04:41.988 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.988 TEST_HEADER include/spdk/init.h 00:04:41.988 TEST_HEADER include/spdk/ioat.h 00:04:41.988 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.988 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.988 TEST_HEADER include/spdk/json.h 00:04:41.988 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.988 TEST_HEADER include/spdk/likely.h 00:04:41.988 TEST_HEADER include/spdk/log.h 00:04:41.988 TEST_HEADER include/spdk/lvol.h 00:04:41.988 TEST_HEADER include/spdk/memory.h 00:04:41.988 TEST_HEADER include/spdk/mmio.h 00:04:41.988 TEST_HEADER include/spdk/nbd.h 00:04:41.988 TEST_HEADER include/spdk/notify.h 00:04:41.988 TEST_HEADER include/spdk/nvme.h 00:04:41.988 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.988 CC test/dma/test_dma/test_dma.o 00:04:41.988 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.988 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.988 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.988 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.988 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.988 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.988 TEST_HEADER include/spdk/nvmf.h 00:04:41.988 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.988 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.988 TEST_HEADER include/spdk/opal.h 00:04:41.988 TEST_HEADER include/spdk/opal_spec.h 00:04:41.988 TEST_HEADER include/spdk/pci_ids.h 00:04:41.988 TEST_HEADER include/spdk/pipe.h 00:04:41.988 TEST_HEADER include/spdk/queue.h 00:04:41.989 TEST_HEADER include/spdk/reduce.h 00:04:41.989 TEST_HEADER include/spdk/rpc.h 00:04:41.989 TEST_HEADER include/spdk/scheduler.h 00:04:41.989 TEST_HEADER include/spdk/scsi.h 00:04:41.989 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.989 TEST_HEADER include/spdk/sock.h 00:04:41.989 TEST_HEADER include/spdk/stdinc.h 00:04:41.989 TEST_HEADER include/spdk/string.h 00:04:41.989 TEST_HEADER include/spdk/thread.h 00:04:41.989 TEST_HEADER include/spdk/trace.h 00:04:41.989 TEST_HEADER include/spdk/trace_parser.h 00:04:41.989 TEST_HEADER include/spdk/tree.h 00:04:41.989 TEST_HEADER include/spdk/ublk.h 00:04:41.989 TEST_HEADER include/spdk/util.h 00:04:41.989 TEST_HEADER include/spdk/uuid.h 00:04:41.989 TEST_HEADER include/spdk/version.h 00:04:41.989 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.989 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.989 TEST_HEADER include/spdk/vhost.h 00:04:41.989 TEST_HEADER include/spdk/vmd.h 00:04:41.989 TEST_HEADER include/spdk/xor.h 00:04:41.989 TEST_HEADER include/spdk/zipf.h 00:04:41.989 CC test/env/mem_callbacks/mem_callbacks.o 00:04:41.989 CXX test/cpp_headers/accel.o 00:04:41.989 CC app/fio/bdev/fio_plugin.o 00:04:41.989 LINK mkfs 00:04:41.989 LINK vhost_fuzz 00:04:41.989 CC test/app/histogram_perf/histogram_perf.o 00:04:42.247 CC examples/idxd/perf/perf.o 00:04:42.247 CXX test/cpp_headers/accel_module.o 00:04:42.247 LINK nvme_manage 00:04:42.247 CC test/app/jsoncat/jsoncat.o 00:04:42.247 LINK histogram_perf 00:04:42.247 CC test/app/stub/stub.o 00:04:42.247 CXX test/cpp_headers/assert.o 00:04:42.247 LINK test_dma 00:04:42.505 LINK jsoncat 00:04:42.505 CC examples/nvme/arbitration/arbitration.o 00:04:42.505 CC test/env/vtophys/vtophys.o 00:04:42.505 LINK stub 00:04:42.505 CXX test/cpp_headers/barrier.o 00:04:42.505 LINK spdk_bdev 00:04:42.505 LINK idxd_perf 00:04:42.505 LINK mem_callbacks 00:04:42.763 LINK vtophys 00:04:42.763 CXX test/cpp_headers/base64.o 00:04:42.763 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:42.763 CXX test/cpp_headers/bdev.o 00:04:42.763 
CXX test/cpp_headers/bdev_module.o 00:04:42.763 CC test/event/event_perf/event_perf.o 00:04:42.763 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:42.763 LINK arbitration 00:04:42.763 CC test/event/reactor/reactor.o 00:04:42.763 LINK event_perf 00:04:42.763 LINK interrupt_tgt 00:04:42.763 CC test/event/reactor_perf/reactor_perf.o 00:04:43.021 CXX test/cpp_headers/bdev_zone.o 00:04:43.021 CC test/event/app_repeat/app_repeat.o 00:04:43.021 LINK env_dpdk_post_init 00:04:43.021 CC test/event/scheduler/scheduler.o 00:04:43.021 LINK reactor 00:04:43.021 LINK reactor_perf 00:04:43.021 CC examples/nvme/hotplug/hotplug.o 00:04:43.021 CXX test/cpp_headers/bit_array.o 00:04:43.021 LINK app_repeat 00:04:43.279 CC test/env/memory/memory_ut.o 00:04:43.279 LINK iscsi_fuzz 00:04:43.279 CC test/env/pci/pci_ut.o 00:04:43.279 LINK scheduler 00:04:43.279 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.279 CC test/lvol/esnap/esnap.o 00:04:43.279 CXX test/cpp_headers/bit_pool.o 00:04:43.279 CC test/nvme/aer/aer.o 00:04:43.279 LINK hotplug 00:04:43.279 CC test/rpc_client/rpc_client_test.o 00:04:43.537 CXX test/cpp_headers/blob_bdev.o 00:04:43.537 CXX test/cpp_headers/blobfs_bdev.o 00:04:43.537 LINK cmb_copy 00:04:43.537 CC test/nvme/reset/reset.o 00:04:43.537 CC test/nvme/sgl/sgl.o 00:04:43.537 LINK rpc_client_test 00:04:43.537 LINK aer 00:04:43.537 LINK pci_ut 00:04:43.794 CXX test/cpp_headers/blobfs.o 00:04:43.794 CC test/nvme/e2edp/nvme_dp.o 00:04:43.794 CC examples/nvme/abort/abort.o 00:04:43.794 LINK reset 00:04:43.794 CC test/nvme/overhead/overhead.o 00:04:43.794 LINK sgl 00:04:43.794 CC test/thread/poller_perf/poller_perf.o 00:04:43.794 CXX test/cpp_headers/blob.o 00:04:44.052 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:44.052 LINK nvme_dp 00:04:44.052 LINK poller_perf 00:04:44.052 CXX test/cpp_headers/conf.o 00:04:44.052 CC test/nvme/err_injection/err_injection.o 00:04:44.052 CC test/nvme/startup/startup.o 00:04:44.052 LINK memory_ut 00:04:44.052 LINK abort 00:04:44.052 LINK overhead 00:04:44.052 LINK pmr_persistence 00:04:44.310 CXX test/cpp_headers/config.o 00:04:44.310 LINK startup 00:04:44.310 CC test/nvme/reserve/reserve.o 00:04:44.310 LINK err_injection 00:04:44.310 CXX test/cpp_headers/cpuset.o 00:04:44.310 CC test/nvme/simple_copy/simple_copy.o 00:04:44.310 CXX test/cpp_headers/crc16.o 00:04:44.310 CXX test/cpp_headers/crc32.o 00:04:44.310 CXX test/cpp_headers/crc64.o 00:04:44.310 CC test/nvme/connect_stress/connect_stress.o 00:04:44.570 CXX test/cpp_headers/dif.o 00:04:44.570 CC test/nvme/boot_partition/boot_partition.o 00:04:44.570 LINK reserve 00:04:44.570 CC test/nvme/compliance/nvme_compliance.o 00:04:44.570 CC test/nvme/fused_ordering/fused_ordering.o 00:04:44.570 CXX test/cpp_headers/dma.o 00:04:44.570 LINK simple_copy 00:04:44.570 CXX test/cpp_headers/endian.o 00:04:44.570 LINK connect_stress 00:04:44.570 CXX test/cpp_headers/env_dpdk.o 00:04:44.570 LINK boot_partition 00:04:44.831 CXX test/cpp_headers/env.o 00:04:44.831 CXX test/cpp_headers/event.o 00:04:44.831 LINK fused_ordering 00:04:44.831 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:44.831 CC test/nvme/fdp/fdp.o 00:04:44.831 CXX test/cpp_headers/fd_group.o 00:04:44.831 CC test/nvme/cuse/cuse.o 00:04:44.831 LINK nvme_compliance 00:04:44.831 CXX test/cpp_headers/fd.o 00:04:44.831 CXX test/cpp_headers/file.o 00:04:44.831 CXX test/cpp_headers/ftl.o 00:04:44.831 CXX test/cpp_headers/gpt_spec.o 00:04:45.089 LINK doorbell_aers 00:04:45.089 CXX test/cpp_headers/hexlify.o 00:04:45.089 CXX 
test/cpp_headers/histogram_data.o 00:04:45.089 CXX test/cpp_headers/idxd.o 00:04:45.089 CXX test/cpp_headers/idxd_spec.o 00:04:45.089 CXX test/cpp_headers/init.o 00:04:45.089 CXX test/cpp_headers/ioat.o 00:04:45.089 LINK fdp 00:04:45.089 CXX test/cpp_headers/ioat_spec.o 00:04:45.089 CXX test/cpp_headers/iscsi_spec.o 00:04:45.089 CXX test/cpp_headers/json.o 00:04:45.348 CXX test/cpp_headers/jsonrpc.o 00:04:45.348 CXX test/cpp_headers/likely.o 00:04:45.348 CXX test/cpp_headers/log.o 00:04:45.348 CXX test/cpp_headers/lvol.o 00:04:45.348 CXX test/cpp_headers/memory.o 00:04:45.348 CXX test/cpp_headers/mmio.o 00:04:45.348 CXX test/cpp_headers/nbd.o 00:04:45.348 CXX test/cpp_headers/notify.o 00:04:45.348 CXX test/cpp_headers/nvme.o 00:04:45.348 CXX test/cpp_headers/nvme_intel.o 00:04:45.348 CXX test/cpp_headers/nvme_ocssd.o 00:04:45.348 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:45.348 CXX test/cpp_headers/nvme_spec.o 00:04:45.607 CXX test/cpp_headers/nvme_zns.o 00:04:45.607 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.607 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.607 CXX test/cpp_headers/nvmf.o 00:04:45.607 CXX test/cpp_headers/nvmf_spec.o 00:04:45.607 CXX test/cpp_headers/nvmf_transport.o 00:04:45.607 CXX test/cpp_headers/opal.o 00:04:45.607 CXX test/cpp_headers/opal_spec.o 00:04:45.607 CXX test/cpp_headers/pci_ids.o 00:04:45.607 CXX test/cpp_headers/pipe.o 00:04:45.607 CXX test/cpp_headers/queue.o 00:04:45.865 CXX test/cpp_headers/reduce.o 00:04:45.865 CXX test/cpp_headers/rpc.o 00:04:45.865 CXX test/cpp_headers/scheduler.o 00:04:45.865 CXX test/cpp_headers/scsi.o 00:04:45.865 CXX test/cpp_headers/scsi_spec.o 00:04:45.865 CXX test/cpp_headers/sock.o 00:04:45.865 CXX test/cpp_headers/stdinc.o 00:04:45.865 CXX test/cpp_headers/string.o 00:04:45.865 LINK cuse 00:04:45.865 CXX test/cpp_headers/thread.o 00:04:45.865 CXX test/cpp_headers/trace.o 00:04:45.865 CXX test/cpp_headers/trace_parser.o 00:04:45.865 CXX test/cpp_headers/tree.o 00:04:45.865 CXX test/cpp_headers/ublk.o 00:04:46.124 CXX test/cpp_headers/util.o 00:04:46.124 CXX test/cpp_headers/uuid.o 00:04:46.124 CXX test/cpp_headers/version.o 00:04:46.124 CXX test/cpp_headers/vfio_user_pci.o 00:04:46.124 CXX test/cpp_headers/vfio_user_spec.o 00:04:46.124 CXX test/cpp_headers/vhost.o 00:04:46.124 CXX test/cpp_headers/vmd.o 00:04:46.124 CXX test/cpp_headers/xor.o 00:04:46.124 CXX test/cpp_headers/zipf.o 00:04:48.044 LINK esnap 00:04:48.303 00:04:48.303 real 0m52.948s 00:04:48.303 user 4m57.687s 00:04:48.303 sys 0m56.576s 00:04:48.303 06:29:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:48.303 06:29:28 -- common/autotest_common.sh@10 -- $ set +x 00:04:48.303 ************************************ 00:04:48.303 END TEST make 00:04:48.303 ************************************ 00:04:48.303 06:29:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.303 06:29:28 -- nvmf/common.sh@7 -- # uname -s 00:04:48.303 06:29:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.303 06:29:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.303 06:29:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.303 06:29:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.303 06:29:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.303 06:29:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.303 06:29:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.303 06:29:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.303 06:29:28 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.561 06:29:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.561 06:29:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:04:48.561 06:29:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:04:48.561 06:29:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.561 06:29:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.561 06:29:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:48.561 06:29:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.561 06:29:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.561 06:29:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.561 06:29:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.561 06:29:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.561 06:29:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.561 06:29:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.561 06:29:28 -- paths/export.sh@5 -- # export PATH 00:04:48.561 06:29:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.561 06:29:28 -- nvmf/common.sh@46 -- # : 0 00:04:48.562 06:29:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:48.562 06:29:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:48.562 06:29:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:48.562 06:29:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.562 06:29:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.562 06:29:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:48.562 06:29:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:48.562 06:29:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:48.562 06:29:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:48.562 06:29:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:48.562 06:29:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:48.562 06:29:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:48.562 06:29:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:48.562 06:29:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:48.562 06:29:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:48.562 
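[Annotation] Before the tests start, autotest.sh saves the existing kernel core_pattern (here the systemd-coredump pipe) and redirects core dumps to SPDK's collector script. The xtrace above shows the echo but not its redirection target; writing to /proc/sys/kernel/core_pattern is an assumption from context. Conceptually, as root:

  # save the previous handler, then pipe cores to the collector script from this run
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
  echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern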
06:29:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:48.562 06:29:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:48.562 06:29:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:48.562 06:29:28 -- spdk/autotest.sh@48 -- # udevadm_pid=59990 00:04:48.562 06:29:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:48.562 06:29:28 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:48.562 06:29:28 -- spdk/autotest.sh@54 -- # echo 59993 00:04:48.562 06:29:28 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:48.562 06:29:28 -- spdk/autotest.sh@56 -- # echo 59995 00:04:48.562 06:29:28 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:48.562 06:29:28 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:48.562 06:29:28 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.562 06:29:28 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:48.562 06:29:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.562 06:29:28 -- common/autotest_common.sh@10 -- # set +x 00:04:48.562 06:29:28 -- spdk/autotest.sh@70 -- # create_test_list 00:04:48.562 06:29:28 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:48.562 06:29:28 -- common/autotest_common.sh@10 -- # set +x 00:04:48.562 06:29:28 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:48.562 06:29:28 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:48.562 06:29:28 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:48.562 06:29:28 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:48.562 06:29:28 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:48.562 06:29:28 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:48.562 06:29:28 -- common/autotest_common.sh@1440 -- # uname 00:04:48.562 06:29:28 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:48.562 06:29:28 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:48.562 06:29:28 -- common/autotest_common.sh@1460 -- # uname 00:04:48.562 06:29:28 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:48.562 06:29:28 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:48.562 06:29:28 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:48.562 06:29:28 -- spdk/autotest.sh@83 -- # hash lcov 00:04:48.562 06:29:28 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:48.562 06:29:28 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:48.562 --rc lcov_branch_coverage=1 00:04:48.562 --rc lcov_function_coverage=1 00:04:48.562 --rc genhtml_branch_coverage=1 00:04:48.562 --rc genhtml_function_coverage=1 00:04:48.562 --rc genhtml_legend=1 00:04:48.562 --rc geninfo_all_blocks=1 00:04:48.562 ' 00:04:48.562 06:29:28 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:48.562 --rc lcov_branch_coverage=1 00:04:48.562 --rc lcov_function_coverage=1 00:04:48.562 --rc genhtml_branch_coverage=1 00:04:48.562 --rc genhtml_function_coverage=1 00:04:48.562 --rc genhtml_legend=1 00:04:48.562 --rc geninfo_all_blocks=1 00:04:48.562 ' 00:04:48.562 06:29:28 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:48.562 --rc lcov_branch_coverage=1 00:04:48.562 --rc lcov_function_coverage=1 00:04:48.562 --rc genhtml_branch_coverage=1 00:04:48.562 --rc 
genhtml_function_coverage=1 00:04:48.562 --rc genhtml_legend=1 00:04:48.562 --rc geninfo_all_blocks=1 00:04:48.562 --no-external' 00:04:48.562 06:29:28 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:48.562 --rc lcov_branch_coverage=1 00:04:48.562 --rc lcov_function_coverage=1 00:04:48.562 --rc genhtml_branch_coverage=1 00:04:48.562 --rc genhtml_function_coverage=1 00:04:48.562 --rc genhtml_legend=1 00:04:48.562 --rc geninfo_all_blocks=1 00:04:48.562 --no-external' 00:04:48.562 06:29:28 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:48.820 lcov: LCOV version 1.14 00:04:48.820 06:29:28 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:58.784 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:58.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:58.784 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:58.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:58.784 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:58.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:16.870 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:16.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:16.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no 
functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:16.871 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:16.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:16.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:18.247 06:29:58 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:18.247 06:29:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:18.247 06:29:58 -- common/autotest_common.sh@10 -- # set +x 00:05:18.247 06:29:58 -- spdk/autotest.sh@102 -- # rm -f 00:05:18.247 06:29:58 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.183 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:19.183 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:19.183 06:29:58 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:19.183 06:29:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:19.183 06:29:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:19.183 06:29:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:19.183 06:29:58 -- common/autotest_common.sh@1657 -- # 
for nvme in /sys/block/nvme* 00:05:19.183 06:29:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:19.183 06:29:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:19.183 06:29:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:19.183 06:29:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:19.183 06:29:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:19.183 06:29:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:19.183 06:29:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:19.183 06:29:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:19.183 06:29:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:19.183 06:29:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:19.183 06:29:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:19.183 06:29:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:19.183 06:29:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:19.183 06:29:58 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:19.183 06:29:58 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:19.183 06:29:58 -- spdk/autotest.sh@121 -- # grep -v p 00:05:19.183 06:29:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:19.183 06:29:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:19.183 06:29:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:19.183 06:29:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:19.183 06:29:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:19.183 No valid GPT data, bailing 00:05:19.183 06:29:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.183 06:29:58 -- scripts/common.sh@393 -- # pt= 00:05:19.183 06:29:58 -- scripts/common.sh@394 -- # return 1 00:05:19.183 06:29:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:19.183 1+0 records in 00:05:19.183 1+0 records out 00:05:19.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427459 s, 245 MB/s 00:05:19.183 06:29:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:19.183 06:29:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:19.183 06:29:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:19.183 06:29:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:19.183 06:29:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:19.183 No valid GPT data, bailing 00:05:19.183 06:29:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:19.183 06:29:59 -- scripts/common.sh@393 -- # pt= 00:05:19.183 06:29:59 -- scripts/common.sh@394 -- # return 1 00:05:19.183 06:29:59 -- 
spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:19.183 1+0 records in 00:05:19.183 1+0 records out 00:05:19.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452357 s, 232 MB/s 00:05:19.183 06:29:59 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:19.183 06:29:59 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:19.183 06:29:59 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:05:19.183 06:29:59 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:19.183 06:29:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:19.183 No valid GPT data, bailing 00:05:19.183 06:29:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:19.183 06:29:59 -- scripts/common.sh@393 -- # pt= 00:05:19.183 06:29:59 -- scripts/common.sh@394 -- # return 1 00:05:19.183 06:29:59 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:19.183 1+0 records in 00:05:19.183 1+0 records out 00:05:19.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442172 s, 237 MB/s 00:05:19.183 06:29:59 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:19.183 06:29:59 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:19.183 06:29:59 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:19.184 06:29:59 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:19.184 06:29:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:19.442 No valid GPT data, bailing 00:05:19.442 06:29:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:19.442 06:29:59 -- scripts/common.sh@393 -- # pt= 00:05:19.442 06:29:59 -- scripts/common.sh@394 -- # return 1 00:05:19.442 06:29:59 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:19.442 1+0 records in 00:05:19.442 1+0 records out 00:05:19.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445555 s, 235 MB/s 00:05:19.442 06:29:59 -- spdk/autotest.sh@129 -- # sync 00:05:19.442 06:29:59 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:19.442 06:29:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:19.442 06:29:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:21.345 06:30:01 -- spdk/autotest.sh@135 -- # uname -s 00:05:21.345 06:30:01 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:21.345 06:30:01 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:21.345 06:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.345 06:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.345 06:30:01 -- common/autotest_common.sh@10 -- # set +x 00:05:21.345 ************************************ 00:05:21.345 START TEST setup.sh 00:05:21.345 ************************************ 00:05:21.345 06:30:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:21.345 * Looking for test storage... 
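The pre_cleanup wipe traced above reduces to a short loop: list every whole NVMe namespace, probe it for a partition table, and zero the first MiB of any device that reports none. A condensed, hand-runnable sketch assembled from the commands visible in the trace (the combined loop is illustrative, not the verbatim autotest.sh source):

#!/usr/bin/env bash
# Condensed sketch of the device scrub traced above; paths match this job's layout.
spdk=/home/vagrant/spdk_repo/spdk
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    "$spdk/scripts/spdk-gpt.py" "$dev"            # prints "No valid GPT data, bailing" on an empty disk
    pt=$(blkid -s PTTYPE -o value "$dev")         # empty when no partition table is present
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # scrub the first MiB, as in the dd lines above
    fi
done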
00:05:21.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:21.345 06:30:01 -- setup/test-setup.sh@10 -- # uname -s 00:05:21.345 06:30:01 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:21.345 06:30:01 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:21.345 06:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.345 06:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.345 06:30:01 -- common/autotest_common.sh@10 -- # set +x 00:05:21.345 ************************************ 00:05:21.345 START TEST acl 00:05:21.345 ************************************ 00:05:21.345 06:30:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:21.603 * Looking for test storage... 00:05:21.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:21.603 06:30:01 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:21.603 06:30:01 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:21.603 06:30:01 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:21.603 06:30:01 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:21.603 06:30:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.603 06:30:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:21.603 06:30:01 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:21.603 06:30:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.603 06:30:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.603 06:30:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.603 06:30:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:21.603 06:30:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:21.603 06:30:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:21.603 06:30:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.603 06:30:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.603 06:30:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:21.603 06:30:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:21.603 06:30:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:21.604 06:30:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.604 06:30:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.604 06:30:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:21.604 06:30:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:21.604 06:30:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:21.604 06:30:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.604 06:30:01 -- setup/acl.sh@12 -- # devs=() 00:05:21.604 06:30:01 -- setup/acl.sh@12 -- # declare -a devs 00:05:21.604 06:30:01 -- setup/acl.sh@13 -- # drivers=() 00:05:21.604 06:30:01 -- setup/acl.sh@13 -- # declare -A drivers 00:05:21.604 06:30:01 -- setup/acl.sh@51 -- # setup reset 00:05:21.604 06:30:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.604 06:30:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.171 06:30:01 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:22.171 06:30:01 -- setup/acl.sh@16 -- # local dev driver 00:05:22.171 
06:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:22.171 06:30:01 -- setup/acl.sh@15 -- # setup output status 00:05:22.171 06:30:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.171 06:30:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.430 Hugepages 00:05:22.430 node hugesize free / total 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # continue 00:05:22.430 06:30:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:22.430 00:05:22.430 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # continue 00:05:22.430 06:30:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:22.430 06:30:02 -- setup/acl.sh@20 -- # continue 00:05:22.430 06:30:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:22.430 06:30:02 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:22.430 06:30:02 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:22.430 06:30:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:22.430 06:30:02 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:22.430 06:30:02 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:22.430 06:30:02 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:22.430 06:30:02 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:22.430 06:30:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:22.430 06:30:02 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:22.430 06:30:02 -- setup/acl.sh@54 -- # run_test denied denied 00:05:22.689 06:30:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.689 06:30:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.689 06:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:22.689 ************************************ 00:05:22.689 START TEST denied 00:05:22.689 ************************************ 00:05:22.689 06:30:02 -- common/autotest_common.sh@1104 -- # denied 00:05:22.689 06:30:02 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:22.689 06:30:02 -- setup/acl.sh@38 -- # setup output config 00:05:22.689 06:30:02 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:22.689 06:30:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.689 06:30:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.624 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:23.624 06:30:03 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:23.624 06:30:03 -- setup/acl.sh@28 -- # local dev driver 00:05:23.624 06:30:03 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:23.624 06:30:03 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:23.624 06:30:03 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:23.624 06:30:03 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:23.624 06:30:03 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 
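The verify step traced above boils down to a sysfs driver check: the controller blocked via PCI_BLOCKED must still exist and must still be bound to the kernel nvme driver, confirming that setup.sh config really skipped it. Condensed from the trace (the standalone form is illustrative):

# Equivalent of acl.sh's verify for the blocked controller at 0000:00:06.0.
bdf=0000:00:06.0
[[ -e /sys/bus/pci/devices/$bdf ]]                        # controller is still present
driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")  # e.g. /sys/bus/pci/drivers/nvme
[[ ${driver##*/} == nvme ]]                               # still nvme, not uio_pci_generic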
00:05:23.624 06:30:03 -- setup/acl.sh@41 -- # setup reset 00:05:23.624 06:30:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.624 06:30:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.262 00:05:24.262 real 0m1.463s 00:05:24.262 user 0m0.563s 00:05:24.262 sys 0m0.806s 00:05:24.262 06:30:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.262 06:30:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.262 ************************************ 00:05:24.262 END TEST denied 00:05:24.262 ************************************ 00:05:24.262 06:30:03 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:24.262 06:30:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.262 06:30:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.262 06:30:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.262 ************************************ 00:05:24.262 START TEST allowed 00:05:24.262 ************************************ 00:05:24.262 06:30:03 -- common/autotest_common.sh@1104 -- # allowed 00:05:24.262 06:30:03 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:24.262 06:30:03 -- setup/acl.sh@45 -- # setup output config 00:05:24.262 06:30:03 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:24.262 06:30:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.262 06:30:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.851 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.851 06:30:04 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:24.851 06:30:04 -- setup/acl.sh@28 -- # local dev driver 00:05:24.851 06:30:04 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:24.851 06:30:04 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:24.851 06:30:04 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:24.851 06:30:04 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:24.851 06:30:04 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:24.851 06:30:04 -- setup/acl.sh@48 -- # setup reset 00:05:24.851 06:30:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.851 06:30:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.788 00:05:25.788 real 0m1.515s 00:05:25.788 user 0m0.624s 00:05:25.788 sys 0m0.877s 00:05:25.788 06:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.788 06:30:05 -- common/autotest_common.sh@10 -- # set +x 00:05:25.788 ************************************ 00:05:25.788 END TEST allowed 00:05:25.788 ************************************ 00:05:25.788 00:05:25.788 real 0m4.208s 00:05:25.788 user 0m1.755s 00:05:25.788 sys 0m2.374s 00:05:25.788 06:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.788 06:30:05 -- common/autotest_common.sh@10 -- # set +x 00:05:25.788 ************************************ 00:05:25.788 END TEST acl 00:05:25.788 ************************************ 00:05:25.788 06:30:05 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:25.788 06:30:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.788 06:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.788 06:30:05 -- common/autotest_common.sh@10 -- # set +x 00:05:25.788 ************************************ 00:05:25.788 START TEST hugepages 00:05:25.788 ************************************ 00:05:25.788 06:30:05 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:25.788 * Looking for test storage... 00:05:25.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.788 06:30:05 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:25.788 06:30:05 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:25.788 06:30:05 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:25.788 06:30:05 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:25.788 06:30:05 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:25.788 06:30:05 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:25.788 06:30:05 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:25.788 06:30:05 -- setup/common.sh@18 -- # local node= 00:05:25.788 06:30:05 -- setup/common.sh@19 -- # local var val 00:05:25.788 06:30:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.788 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.788 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.788 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.788 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.788 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4597972 kB' 'MemAvailable: 7368280 kB' 'Buffers: 2436 kB' 'Cached: 2974464 kB' 'SwapCached: 0 kB' 'Active: 433544 kB' 'Inactive: 2645504 kB' 'Active(anon): 112644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 104056 kB' 'Mapped: 48984 kB' 'Shmem: 10492 kB' 'KReclaimable: 81684 kB' 'Slab: 159640 kB' 'SReclaimable: 81684 kB' 'SUnreclaim: 77956 kB' 'KernelStack: 6784 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 333300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.788 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.788 06:30:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 
06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- 
setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # continue 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.789 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.789 06:30:05 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.789 06:30:05 -- setup/common.sh@33 -- # echo 2048 00:05:25.789 06:30:05 -- setup/common.sh@33 -- # return 0 00:05:25.789 06:30:05 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:25.789 06:30:05 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:25.789 06:30:05 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:25.789 06:30:05 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:25.789 06:30:05 
00:05:25.789 06:30:05 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:25.789 06:30:05 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:25.789 06:30:05 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:25.789 06:30:05 -- setup/hugepages.sh@207 -- # get_nodes
00:05:25.789 06:30:05 -- setup/hugepages.sh@27 -- # local node
00:05:25.789 06:30:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:25.789 06:30:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:25.789 06:30:05 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:25.789 06:30:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:25.789 06:30:05 -- setup/hugepages.sh@208 -- # clear_hp
00:05:25.789 06:30:05 -- setup/hugepages.sh@37 -- # local node hp
00:05:25.790 06:30:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:25.790 06:30:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:25.790 06:30:05 -- setup/hugepages.sh@41 -- # echo 0
00:05:25.790 06:30:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:25.790 06:30:05 -- setup/hugepages.sh@41 -- # echo 0
00:05:25.790 06:30:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:25.790 06:30:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:25.790 06:30:05 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:25.790 06:30:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:25.790 06:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:25.790 06:30:05 -- common/autotest_common.sh@10 -- # set +x
00:05:25.790 ************************************
00:05:25.790 START TEST default_setup
00:05:25.790 ************************************
00:05:25.790 06:30:05 -- common/autotest_common.sh@1104 -- # default_setup
00:05:25.790 06:30:05 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:25.790 06:30:05 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:25.790 06:30:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:25.790 06:30:05 -- setup/hugepages.sh@51 -- # shift
00:05:25.790 06:30:05 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:25.790 06:30:05 -- setup/hugepages.sh@52 -- # local node_ids
00:05:25.790 06:30:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:25.790 06:30:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:25.790 06:30:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:25.790 06:30:05 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:25.790 06:30:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:25.790 06:30:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:25.790 06:30:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:25.790 06:30:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:25.790 06:30:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:25.790 06:30:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:25.790 06:30:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:25.790 06:30:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:25.790 06:30:05 -- setup/hugepages.sh@73 -- # return 0
00:05:25.790 06:30:05 -- setup/hugepages.sh@137 -- # setup output
00:05:25.790 06:30:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:25.790 06:30:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
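The clear_hp trace above zeroes every per-node hugepage pool before the test allocates its own (each @40/@41 pair is one "echo 0" into a hugepage-size directory under node0). A stand-alone sketch of the same sysfs walk under those assumptions (clear_hugepages is our name, not the script's; writing these files needs root):

    clear_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # zero this node's pool for this page size (needs root)
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes    # exported for the subsequent setup.sh run
    }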
00:05:26.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:26.618 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:26.618 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:05:26.618 06:30:06 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:26.618 06:30:06 -- setup/hugepages.sh@89 -- # local node
00:05:26.618 06:30:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:26.618 06:30:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:26.618 06:30:06 -- setup/hugepages.sh@92 -- # local surp
00:05:26.618 06:30:06 -- setup/hugepages.sh@93 -- # local resv
00:05:26.618 06:30:06 -- setup/hugepages.sh@94 -- # local anon
00:05:26.618 06:30:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:26.618 06:30:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:26.618 06:30:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:26.618 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:26.618 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:26.618 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:26.618 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.618 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.618 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.618 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.619 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.619 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:26.619 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:26.619 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6688652 kB' 'MemAvailable: 9458748 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449772 kB' 'Inactive: 2645512 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120136 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159288 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78040 kB' 'KernelStack: 6736 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
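The get_meminfo preamble traced a few lines up is the per-NUMA-node fallback: with node= empty, the @23 test on /sys/devices/system/node/node/meminfo fails and the global /proc/meminfo is used; per-node meminfo lines carry a "Node N " prefix, which the @29 expansion strips. A sketch of that source selection (meminfo_source is a hypothetical helper name, not from the script):

    shopt -s extglob                      # the +([0-9]) pattern below needs extglob
    meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        # prefer the per-node view when a node id is given and sysfs exposes it;
        # with an empty node the test fails and /proc/meminfo is used, as above
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node lines read "Node 0 MemTotal: ... kB"; drop the prefix,
        # exactly as the mem=("${mem[@]#Node +([0-9]) }") expansion does
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }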
[log condensed: get_meminfo AnonHugePages -- the setup/common.sh@31 read / @32 compare / @32 continue xtrace repeats for every snapshot key from MemTotal through HardwareCorrupted; none matches]
00:05:26.620 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
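The hugepages.sh@96 test shown earlier gates this lookup: verify_nr_hugepages only samples AnonHugePages when transparent hugepages are not fully disabled (the runner's "always [madvise] never" does not contain "[never]"). Restated on its own, with our variable names and awk standing in for get_meminfo:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # sample anonymous THP usage from the same meminfo field the trace reads
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: $anon kB"
    fi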
00:05:26.620 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:26.620 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:26.620 06:30:06 -- setup/hugepages.sh@97 -- # anon=0
00:05:26.620 06:30:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:26.620 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:26.620 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:26.620 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:26.620 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:26.620 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.620 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.620 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.620 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.620 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.620 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:26.620 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:26.620 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6688652 kB' 'MemAvailable: 9458748 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449324 kB' 'Inactive: 2645512 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119556 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159284 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78036 kB' 'KernelStack: 6720 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
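As the trace makes visible, every get_meminfo call re-dumps and rescans the whole snapshot for a single field. For contrast -- explicitly not what setup/common.sh does -- one pass into an associative array would serve all of the later lookups in this test:

    declare -A meminfo
    while IFS=': ' read -r key val _; do
        meminfo[$key]=$val                 # e.g. meminfo[HugePages_Surp]=0
    done < /proc/meminfo
    echo "anon=${meminfo[AnonHugePages]} surp=${meminfo[HugePages_Surp]} resv=${meminfo[HugePages_Rsvd]}"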
[log condensed: get_meminfo HugePages_Surp -- the read/compare/continue xtrace repeats for every key from MemTotal through HugePages_Rsvd; none matches]
00:05:26.622 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.622 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:26.622 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:26.622 06:30:06 -- setup/hugepages.sh@99 -- # surp=0
00:05:26.622 06:30:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:26.622 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:26.622 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:26.622 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:26.622 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:26.622 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.622 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.622 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.622 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.622 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.622 06:30:06 -- setup/common.sh@31 -- # IFS=': '
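The default_huge_nr and global_huge_nr paths recorded at the start of this test point at the same pool these meminfo fields report: /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages is the per-size count and /proc/sys/vm/nr_hugepages the global one. A quick cross-check sketch (ours, not part of the test):

    sysfs_nr=$(< /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages)
    global_nr=$(< /proc/sys/vm/nr_hugepages)
    meminfo_nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    # with a single hugepage size configured all three should agree (1024 here)
    echo "sysfs=$sysfs_nr global=$global_nr meminfo=$meminfo_nr"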
00:05:26.622 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' [log condensed: identical /proc/meminfo snapshot to the HugePages_Surp pass above]
00:05:26.622 06:30:06 -- setup/common.sh@31 -- # read -r var val _
[log condensed: get_meminfo HugePages_Rsvd -- the read/compare/continue xtrace repeats for every key from MemTotal through HugePages_Free; none matches]
00:05:26.624 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:26.624 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:26.624 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:26.624 06:30:06 -- setup/hugepages.sh@100 -- # resv=0
00:05:26.624 06:30:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:26.624 nr_hugepages=1024
00:05:26.624 06:30:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:26.624 resv_hugepages=0
00:05:26.624 06:30:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:26.624 surplus_hugepages=0
00:05:26.624 06:30:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:26.624 anon_hugepages=0
00:05:26.624 06:30:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:26.624 06:30:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:26.624 06:30:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:26.624 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:26.624 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:26.624 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:26.624 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:26.624 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.624 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.624 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.624 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.624 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.624 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:26.624 06:30:06 -- setup/common.sh@31 -- # read -r var val _
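The two (( ... )) checks above are the accounting identity verify_nr_hugepages asserts after default_setup: the pool the test requested (nr_hugepages=1024, i.e. the 2097152 kB test size divided by the 2048 kB page size) must equal what the kernel reports, with no surplus or reserved pages outstanding. Restated as a stand-alone check under those assumptions (variable names ours; awk stands in for get_meminfo):

    nr_expected=$(( 2097152 / 2048 ))      # = 1024 pages, matching nr_hugepages above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_expected && surp == 0 && resv == 0 )) \
        || echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2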
00:05:26.624 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6688652 kB' 'MemAvailable: 9458748 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449332 kB' 'Inactive: 2645512 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159284 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78036 kB' 'KernelStack: 6736 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351016' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[log condensed: get_meminfo HugePages_Total -- the read/compare/continue xtrace repeats for every key from MemTotal through Unaccepted; the scan is still in progress at this point in the log]
00:05:26.885 06:30:06 -- setup/common.sh@31 -- #
IFS=': ' 00:05:26.885 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.885 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.885 06:30:06 -- setup/common.sh@33 -- # echo 1024 00:05:26.885 06:30:06 -- setup/common.sh@33 -- # return 0 00:05:26.885 06:30:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.885 06:30:06 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.885 06:30:06 -- setup/hugepages.sh@27 -- # local node 00:05:26.885 06:30:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.885 06:30:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.885 06:30:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.885 06:30:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.885 06:30:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.885 06:30:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.885 06:30:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.885 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.885 06:30:06 -- setup/common.sh@18 -- # local node=0 00:05:26.885 06:30:06 -- setup/common.sh@19 -- # local var val 00:05:26.885 06:30:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.886 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.886 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.886 06:30:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.886 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.886 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6689080 kB' 'MemUsed: 5552900 kB' 'SwapCached: 0 kB' 'Active: 449436 kB' 'Inactive: 2645512 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976892 kB' 'Mapped: 48728 kB' 'AnonPages: 119688 kB' 'Shmem: 10468 kB' 'KernelStack: 6736 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81248 kB' 'Slab: 159284 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # 
continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # continue 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.886 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.886 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.886 06:30:06 -- setup/common.sh@33 -- # echo 0 00:05:26.886 06:30:06 -- setup/common.sh@33 -- # return 0 00:05:26.886 06:30:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.886 06:30:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.886 06:30:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.887 06:30:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.887 06:30:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.887 node0=1024 expecting 1024 00:05:26.887 ************************************ 00:05:26.887 END TEST default_setup 00:05:26.887 ************************************ 00:05:26.887 06:30:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.887 00:05:26.887 real 0m0.989s 00:05:26.887 user 0m0.481s 00:05:26.887 sys 0m0.449s 00:05:26.887 06:30:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.887 06:30:06 -- common/autotest_common.sh@10 -- # set +x 00:05:26.887 06:30:06 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:26.887 06:30:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.887 06:30:06 -- 
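All of the compare/continue churn above comes from one small helper, get_meminfo, which default_setup just used to confirm HugePages_Total=1024. A minimal sketch of that pattern, reconstructed from the traced statements rather than copied from setup/common.sh (so treat the body as approximate): it reads /proc/meminfo, or the per-node copy under sysfs when a node id is given, and prints the value column of the first matching key.

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the "Node <id> " prefix strip below

    # Print the value of one meminfo field; with a node id, read the
    # per-node copy under sysfs instead of the global /proc/meminfo.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the traced compare/continue cycle
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on the box traced above
    get_meminfo HugePages_Surp 0  # per-node query against node0/meminfo

The escaped right-hand sides in the trace (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and friends) are just how xtrace renders a quoted pattern; the comparison itself is a literal string match against each key name.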
00:05:26.887 06:30:06 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:26.887 06:30:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:26.887 06:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:26.887 06:30:06 -- common/autotest_common.sh@10 -- # set +x
00:05:26.887 ************************************
00:05:26.887 START TEST per_node_1G_alloc
00:05:26.887 ************************************
00:05:26.887 06:30:06 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:26.887 06:30:06 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:26.887 06:30:06 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:26.887 06:30:06 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:26.887 06:30:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:26.887 06:30:06 -- setup/hugepages.sh@51 -- # shift
00:05:26.887 06:30:06 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:26.887 06:30:06 -- setup/hugepages.sh@52 -- # local node_ids
00:05:26.887 06:30:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:26.887 06:30:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:26.887 06:30:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:26.887 06:30:06 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:26.887 06:30:06 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:26.887 06:30:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:26.887 06:30:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:26.887 06:30:06 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:26.887 06:30:06 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:26.887 06:30:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:26.887 06:30:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:26.887 06:30:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:26.887 06:30:06 -- setup/hugepages.sh@73 -- # return 0
00:05:26.887 06:30:06 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:26.887 06:30:06 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:26.887 06:30:06 -- setup/hugepages.sh@146 -- # setup output
00:05:26.887 06:30:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:26.887 06:30:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:27.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:27.147 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.147 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
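Behind nr_hugepages=512 above: per_node_1G_alloc asks get_test_nr_hugepages for 1048576 kB (1 GiB) pinned to node 0, and with the default 2048 kB hugepage size that works out to 1048576 / 2048 = 512 pages. A hedged sketch of that sizing step (helper and variable names are from the trace, the body is reconstructed, and the awk extraction of Hugepagesize is an assumption of mine, so details may differ from setup/hugepages.sh):

    # Convert a target size in kB into a hugepage count and assign it to the
    # requested NUMA nodes. Here: 1048576 kB / 2048 kB per page = 512 pages.
    declare -a nodes_test=()
    nr_hugepages=0

    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # kB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages  # the lone node 0 gets all 512 here
        done
    }

    get_test_nr_hugepages 1048576 0
    echo "node0 gets ${nodes_test[0]} hugepages"  # 512

NRHUGE=512 HUGENODE=0 then hands those numbers to scripts/setup.sh, which performs the actual reservation and device binding seen in the three PCI lines, before verify_nr_hugepages re-reads meminfo to confirm the allocation stuck.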
00:05:27.147 06:30:06 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:27.147 06:30:06 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:27.147 06:30:06 -- setup/hugepages.sh@89 -- # local node
00:05:27.147 06:30:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:27.147 06:30:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:27.147 06:30:06 -- setup/hugepages.sh@92 -- # local surp
00:05:27.147 06:30:06 -- setup/hugepages.sh@93 -- # local resv
00:05:27.147 06:30:06 -- setup/hugepages.sh@94 -- # local anon
00:05:27.147 06:30:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:27.147 06:30:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:27.147 06:30:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:27.147 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:27.147 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:27.147 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:27.147 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.147 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.147 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.147 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.147 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.147 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:27.147 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:27.147 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7735492 kB' 'MemAvailable: 10505600 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449700 kB' 'Inactive: 2645524 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120252 kB' 'Mapped: 48848 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159328 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78080 kB' 'KernelStack: 6712 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:05:27.147 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.147 06:30:06 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining non-matching key ...]
00:05:27.148 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.148 06:30:07 -- setup/common.sh@33 -- # echo 0
00:05:27.148 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:27.148 06:30:07 -- setup/hugepages.sh@97 -- # anon=0
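A note on the check that gated this anon sample: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] line back at hugepages.sh@96 is xtrace's escaped rendering of a glob match against /sys/kernel/mm/transparent_hugepage/enabled, whose bracketed word marks the active THP policy. Roughly (a sketch of the pattern, not the verbatim script):

    # Only sample transparent-hugepage usage when THP is not hard-disabled.
    # The policy file reads like "always [madvise] never"; the brackets mark
    # the mode currently in effect.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # helper sketched above; 0 kB in this run
    fi
    echo "anon=$anon"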
00:05:27.148 06:30:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:27.148 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.148 06:30:07 -- setup/common.sh@18 -- # local node=
00:05:27.148 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:27.148 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:27.148 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.148 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.148 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.148 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.148 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.148 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:27.148 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:27.148 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7735492 kB' 'MemAvailable: 10505600 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449740 kB' 'Inactive: 2645524 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119940 kB' 'Mapped: 48848 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159332 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78084 kB' 'KernelStack: 6712 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:05:27.148 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.148 06:30:07 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining non-matching key ...]
00:05:27.150 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.150 06:30:07 -- setup/common.sh@33 -- # echo 0
00:05:27.150 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:27.150 06:30:07 -- setup/hugepages.sh@99 -- # surp=0
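With anon and surp now in hand and HugePages_Rsvd fetched next, verify_nr_hugepages has everything for the invariant already visible at hugepages.sh@110 earlier in the trace: HugePages_Total must equal the requested page count plus the surplus and reserved counts. As a standalone sketch, reusing the get_meminfo sketch above (nr_hugepages is whatever the test configured, 512 here):

    # The accounting check the trace keeps rebuilding, cf. the earlier
    # "(( 1024 == nr_hugepages + surp + resv ))" line for default_setup.
    nr_hugepages=512
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    fi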
00:05:27.150 06:30:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:27.150 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:27.150 06:30:07 -- setup/common.sh@18 -- # local node=
00:05:27.150 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:27.150 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:27.150 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.150 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.150 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.150 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.150 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.150 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:27.150 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7735240 kB' 'MemAvailable: 10505348 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449636 kB' 'Inactive: 2645524 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 48928 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159344 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78096 kB' 'KernelStack: 6768 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:05:27.150 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.150 06:30:07 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining non-matching key; timestamps advance to 00:05:27.411 ...]
00:05:27.411 06:30:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.411 06:30:07 -- setup/common.sh@32 -- # continue
00:05:27.411 06:30:07
-- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.411 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.411 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.411 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
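[Note] The condensed scan above is the entire mechanism behind setup/common.sh's get_meminfo: split each /proc/meminfo line on ': ', skip non-matching keys with continue, and echo the value of the first key that matches (the echo 0 / return 0 on the next trace lines are that return path, giving resv=0). A minimal standalone sketch of the same technique; get_meminfo_sketch and its exact shape are illustrative, not the actual setup/common.sh code:

#!/usr/bin/env bash
# Sketch of the per-key lookup seen in the trace: IFS=': ' splits
# "HugePages_Rsvd:       0" into var=HugePages_Rsvd val=0, and every
# non-matching key falls through to `continue`.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"    # kB for most fields, a bare page count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Rsvd    # prints 0 at this point in the run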
00:05:27.412 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:27.412 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.412 06:30:07 -- setup/hugepages.sh@100 -- # resv=0 00:05:27.412 nr_hugepages=512 00:05:27.412 06:30:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:27.412 resv_hugepages=0 00:05:27.412 06:30:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.412 surplus_hugepages=0 00:05:27.412 06:30:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.412 anon_hugepages=0 00:05:27.412 06:30:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.412 06:30:07 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:27.412 06:30:07 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:27.412 06:30:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.412 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.412 06:30:07 -- setup/common.sh@18 -- # local node= 00:05:27.412 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.412 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.412 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.412 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.412 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.412 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.412 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7738952 kB' 'MemAvailable: 10509060 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449788 kB' 'Inactive: 2645524 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159340 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78092 kB' 'KernelStack: 6752 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.412 06:30:07 -- setup/common.sh@32 -- # continue
[xtrace condensed: the per-key scan walks the remaining /proc/meminfo keys until HugePages_Total matches]
00:05:27.413 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.413 06:30:07 -- setup/common.sh@33 -- # echo 512 00:05:27.413 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.413 06:30:07 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:27.413 06:30:07 -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.413 06:30:07 -- setup/hugepages.sh@27 -- # local node 00:05:27.413 06:30:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.413 06:30:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:27.413 06:30:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.413 06:30:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.413 06:30:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.413 06:30:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.413 06:30:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
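[Note] get_meminfo HugePages_Surp 0 above is the same lookup pointed at one NUMA node: given a node argument, the trace below switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node <n> " prefix those files carry (the mem=("${mem[@]#Node +([0-9]) }") line). A sketch of that per-node variant, assuming extglob for the prefix strip as the traced script also relies on; names are illustrative:

#!/usr/bin/env bash
shopt -s extglob    # for the +([0-9]) pattern in the prefix strip below
# Sketch: per-node meminfo lines read "Node 0 HugePages_Surp: 0",
# so drop the "Node <n> " prefix before the usual ': ' split.
get_node_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $mem_f ]] || return 1
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }             # "Node 0 MemFree: ..." -> "MemFree: ..."
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0    # 0 here: node 0 holds no surplus pages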
00:05:27.413 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.413 06:30:07 -- setup/common.sh@18 -- # local node=0 00:05:27.413 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.413 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.413 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.413 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.413 06:30:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.413 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.413 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.413 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7738952 kB' 'MemUsed: 4503028 kB' 'SwapCached: 0 kB' 'Active: 449580 kB' 'Inactive: 2645524 kB' 'Active(anon): 128680 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976892 kB' 'Mapped: 48668 kB' 'AnonPages: 119868 kB' 'Shmem: 10468 kB' 'KernelStack: 6720 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81248 kB' 'Slab: 159332 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:27.413 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 06:30:07 -- setup/common.sh@32 -- # continue
[xtrace condensed: the per-key scan walks node0's meminfo keys until HugePages_Surp matches]
00:05:27.414 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.414 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:27.414 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.414 06:30:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.414 06:30:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.414 06:30:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.414 06:30:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.414 node0=512 expecting 512 06:30:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:27.414 06:30:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:27.414 00:05:27.414 real 0m0.519s 00:05:27.414 user 0m0.276s 00:05:27.414 sys 0m0.267s 06:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.414 06:30:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.414 ************************************ 00:05:27.414 END TEST per_node_1G_alloc 00:05:27.414 ************************************ 00:05:27.414 06:30:07 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:27.414 06:30:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.414 06:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.414 06:30:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.414 ************************************ 00:05:27.414 START TEST even_2G_alloc 00:05:27.414 ************************************ 00:05:27.414 06:30:07 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:27.414 06:30:07 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:27.414 06:30:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:27.414 06:30:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:27.414 06:30:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:27.414 06:30:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:27.414 06:30:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:27.414 06:30:07 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.414 06:30:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.414 06:30:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:27.414 06:30:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.414 06:30:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.414 06:30:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.414 06:30:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.414 06:30:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
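[Note] even_2G_alloc has just asked get_test_nr_hugepages for size=2097152 (kB) and got nr_hugepages=1024, consistent with dividing the request by the 2048 kB Hugepagesize reported in the meminfo dumps above; with a single node the whole count lands in nodes_test[0], and NRHUGE=1024 with HUGE_EVEN_ALLOC=yes (below) hands that to setup.sh. A sketch of the arithmetic, with illustrative variable names rather than the actual setup/hugepages.sh ones:

#!/usr/bin/env bash
# Sketch: a 2 GiB request expressed in kB, divided by the kernel's
# hugepage size, gives the page count that is then spread over nodes.
size_kb=2097152
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024
no_nodes=1                                       # one NUMA node on this VM
echo "nr_hugepages=$nr_hugepages ($(( nr_hugepages / no_nodes )) per node)"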
00:05:27.414 06:30:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.414 06:30:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:27.414 06:30:07 -- setup/hugepages.sh@83 -- # : 0 00:05:27.414 06:30:07 -- setup/hugepages.sh@84 -- # : 0 00:05:27.414 06:30:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.414 06:30:07 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:27.414 06:30:07 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:27.414 06:30:07 -- setup/hugepages.sh@153 -- # setup output 00:05:27.414 06:30:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.414 06:30:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.675 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.675 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.675 06:30:07 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:27.675 06:30:07 -- setup/hugepages.sh@89 -- # local node 00:05:27.675 06:30:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.675 06:30:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.675 06:30:07 -- setup/hugepages.sh@92 -- # local surp 00:05:27.675 06:30:07 -- setup/hugepages.sh@93 -- # local resv 00:05:27.675 06:30:07 -- setup/hugepages.sh@94 -- # local anon 00:05:27.675 06:30:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.675 06:30:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.675 06:30:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.675 06:30:07 -- setup/common.sh@18 -- # local node= 00:05:27.675 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.675 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.675 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.675 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.675 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.675 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.675 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.675 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.675 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.675 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6691752 kB' 'MemAvailable: 9461860 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449948 kB' 'Inactive: 2645524 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120148 kB' 'Mapped: 49184 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159328 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78080 kB' 'KernelStack: 6792 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:27.675 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.675 06:30:07 -- setup/common.sh@32 -- # continue
[xtrace condensed: the per-key scan walks the remaining /proc/meminfo keys until AnonHugePages matches]
00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:27.938 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.938 06:30:07 -- setup/hugepages.sh@97 -- # anon=0 00:05:27.938 06:30:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.938 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.938 06:30:07 -- setup/common.sh@18 -- # local node= 00:05:27.938 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.938 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.938 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.938 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.938 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.938 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.938 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6691752 kB' 'MemAvailable: 9461860 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449676 kB' 'Inactive: 2645524 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159336 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78088 kB' 'KernelStack: 6736 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
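[Note] verify_nr_hugepages, running above, rebuilds its bookkeeping from the dump just printed: anon came from AnonHugePages, consulted only because the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check passed (that string is consistent with reading /sys/kernel/mm/transparent_hugepage/enabled), and the configured total must satisfy HugePages_Total == nr_hugepages + surplus + reserved. A sketch of that reconciliation; meminfo_val is an illustrative helper, not the setup/common.sh API:

#!/usr/bin/env bash
# Sketch of the verification arithmetic: every figure comes straight
# from /proc/meminfo and must reconcile with the requested page count.
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *'[never]'* ]]; then
    anon=$(meminfo_val AnonHugePages)    # kB of transparent hugepages in use
fi
resv=$(meminfo_val HugePages_Rsvd)
surp=$(meminfo_val HugePages_Surp)
total=$(meminfo_val HugePages_Total)
(( total == nr_hugepages + surp + resv )) || echo "unexpected total: $total"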
setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.938 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 
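For readers following the trace: the long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" pairs here is get_meminfo walking /proc/meminfo one "key: value" line at a time until the requested key (HugePages_Surp in this pass) matches; bash xtrace prints the unquoted right-hand pattern with every character backslash-escaped, which is why HugePages_Surp appears as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A minimal sketch of the same loop, illustrative only (the real code lives in test/setup/common.sh and differs in detail; the fallback echo is an assumption):

  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # the original leaves the pattern unquoted, hence the escapes in xtrace
          [[ $var == "$get" ]] || continue
          echo "$val"          # value in kB, or a bare count for HugePages_*
          return 0
      done < /proc/meminfo
      echo 0                   # assumed fallback if the key is missing
  }
  get_field HugePages_Surp     # -> 0 on this host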
00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.939 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:27.940 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.940 06:30:07 -- setup/hugepages.sh@99 -- # surp=0 00:05:27.940 06:30:07 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.940 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.940 06:30:07 -- setup/common.sh@18 -- # local node= 00:05:27.940 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.940 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.940 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.940 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.940 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.940 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.940 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6691500 kB' 'MemAvailable: 9461608 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449284 kB' 'Inactive: 2645524 kB' 'Active(anon): 128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159336 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78088 kB' 'KernelStack: 6720 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 
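The common.sh@17-@29 preamble repeated before each scan (most recently at the start of the HugePages_Rsvd lookup above) picks the data source: with node unset, the "-e /sys/devices/system/node/node/meminfo" probe fails and the scan reads /proc/meminfo; with node=0 (later in this test) it switches to the per-node file, whose lines carry a "Node 0 " prefix that the extglob expansion strips. Roughly, as an illustrative reconstruction rather than the script's verbatim code:

  shopt -s extglob
  node=${node-}                          # empty for the whole-system queries here
  mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  # per-node meminfo prefixes every line with "Node 0 "; stripping it lets
  # both sources parse identically
  mem=("${mem[@]#Node +([0-9]) }")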
00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.940 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.940 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- 
setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 
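Each scan ends, as just below at common.sh@33, with the function echoing the matched value and returning 0; the callers in hugepages.sh capture that via command substitution, which is why the trace shows both the get_meminfo invocation and the resulting assignment. In caller terms (variable names exactly as they appear in the trace; the surrounding code is paraphrased):

  anon=$(get_meminfo AnonHugePages)     # hugepages.sh@97  -> anon=0
  surp=$(get_meminfo HugePages_Surp)    # hugepages.sh@99  -> surp=0
  resv=$(get_meminfo HugePages_Rsvd)    # hugepages.sh@100 -> resv=0
  echo "nr_hugepages=1024"              # the summary echoed at @102-@105
  echo "resv_hugepages=$resv"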
00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:27.941 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.941 06:30:07 -- setup/hugepages.sh@100 -- # resv=0 00:05:27.941 nr_hugepages=1024 00:05:27.941 06:30:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.941 resv_hugepages=0 00:05:27.941 06:30:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.941 surplus_hugepages=0 00:05:27.941 06:30:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.941 anon_hugepages=0 00:05:27.941 06:30:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.941 06:30:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.941 06:30:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:27.941 06:30:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.941 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.941 06:30:07 -- setup/common.sh@18 -- # local node= 00:05:27.941 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.941 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.941 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.941 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.941 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.941 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.941 06:30:07 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6692020 kB' 'MemAvailable: 9462128 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449284 kB' 'Inactive: 2645524 kB' 'Active(anon): 128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81248 kB' 'Slab: 159336 kB' 'SReclaimable: 81248 kB' 'SUnreclaim: 78088 kB' 'KernelStack: 6720 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.941 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 
06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.942 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.943 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 
00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 
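The HugePages_Total lookup finishing just below feeds the consistency checks at hugepages.sh@107-@110: the kernel-reported total must equal the configured page count plus any surplus and reserved pages, here 1024 == 1024 + 0 + 0. A sketch of that check, under the assumption that the error path is simplified:

  total=$(get_meminfo HugePages_Total)            # -> 1024
  (( total == nr_hugepages + surp + resv )) &&
  (( total == nr_hugepages )) || echo 'hugepage accounting mismatch' >&2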
00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 06:30:07 -- setup/common.sh@33 -- # echo 1024 00:05:27.944 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:27.944 06:30:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.944 06:30:07 -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.944 06:30:07 -- setup/hugepages.sh@27 -- # local node 00:05:27.944 06:30:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.944 06:30:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:27.944 06:30:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.944 06:30:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.944 06:30:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.944 06:30:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.944 06:30:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.944 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.944 06:30:07 -- setup/common.sh@18 -- # local node=0 00:05:27.944 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:27.944 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.944 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.944 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.944 06:30:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.944 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.944 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6691968 kB' 'MemUsed: 5550012 kB' 'SwapCached: 0 kB' 'Active: 449532 kB' 'Inactive: 2645524 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976892 kB' 'Mapped: 48668 kB' 'AnonPages: 119808 kB' 'Shmem: 10468 kB' 'KernelStack: 6688 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81264 kB' 'Slab: 159348 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 
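After the system-wide checks, get_nodes (hugepages.sh@27-@33 above) enumerates /sys/devices/system/node/node<N>, and the scan now running repeats get_meminfo with node=0, so mem_f switches to node0's meminfo, whose field set differs slightly (MemUsed and FilePages in place of MemAvailable, Buffers and the swap counters, as the printf above shows). A rough equivalent of the enumeration; the trace assigns no_nodes=1 directly, so the counter below is illustrative:

  shopt -s extglob nullglob
  declare -a nodes_sys
  no_nodes=0
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=1024   # per-node hugepage count (1024 here)
      (( ++no_nodes ))
  done
  # single-NUMA-node VM: only node0 exists, so no_nodes ends up 1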
00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # continue 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 06:30:07 -- 
setup/common.sh@32 -- # continue
00:05:27.945 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:27.945 06:30:07 -- setup/common.sh@31 -- # read -r var val _
[... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace repeated for SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:05:27.945 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.945 06:30:07 -- setup/common.sh@33 -- # echo 0
00:05:27.945 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:27.945 06:30:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:27.945 06:30:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:27.945 06:30:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:27.945 06:30:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:27.945 node0=1024 expecting 1024
00:05:27.945 06:30:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:27.945
00:05:27.945 real 0m0.553s
00:05:27.945 user 0m0.278s
00:05:27.945 sys 0m0.278s
00:05:27.945 06:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:27.945 06:30:07 -- common/autotest_common.sh@10 -- # set +x
00:05:27.945 ************************************
00:05:27.945 END TEST even_2G_alloc
00:05:27.945 ************************************
00:05:27.945 06:30:07 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:27.945 06:30:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:27.945 06:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:27.945 06:30:07 -- common/autotest_common.sh@10 -- # set +x
00:05:27.945 ************************************
00:05:27.945 START TEST odd_alloc
00:05:27.945 ************************************
00:05:27.945 06:30:07 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:27.945 06:30:07 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:27.945 06:30:07 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:27.945 06:30:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:27.945 06:30:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:27.945 06:30:07 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:27.945 06:30:07 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:27.945 06:30:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:27.945 06:30:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:27.945 06:30:07 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:27.945 06:30:07 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:27.945 06:30:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:27.945 06:30:07 -- setup/hugepages.sh@83 -- # : 0
00:05:27.945 06:30:07 -- setup/hugepages.sh@84 -- # : 0
00:05:27.945 06:30:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:27.945 06:30:07 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:27.945 06:30:07 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:27.945 06:30:07 -- setup/hugepages.sh@160 -- # setup output
00:05:27.945 06:30:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:27.945 06:30:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:28.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:28.466 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:28.466 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:28.466 06:30:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:28.466 06:30:08 -- setup/hugepages.sh@89 -- # local node
00:05:28.466 06:30:08 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:28.466 06:30:08 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:28.466 06:30:08 -- setup/hugepages.sh@92 -- # local surp
00:05:28.466 06:30:08 -- setup/hugepages.sh@93 -- # local resv
00:05:28.466 06:30:08 -- setup/hugepages.sh@94 -- # local anon
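
The get_test_nr_hugepages trace above turns HUGEMEM=2049 (MiB) into nr_hugepages=1025. A minimal sketch of that arithmetic, assuming the requested size is rounded up to whole 2048 kB pages (2098176 kB is 1024.5 pages, and the log settles on 1025); the helper below is illustrative, not the verbatim SPDK source:

    # Sketch only: reproduce the size -> page-count step seen at hugepages.sh@49-57.
    hugemem_mb=2049                       # HUGEMEM=2049, as set at hugepages.sh@160
    size_kb=$((hugemem_mb * 1024))        # 2098176, the argument at hugepages.sh@159
    default_hugepages=2048                # Hugepagesize: 2048 kB from /proc/meminfo
    nr_hugepages=$(((size_kb + default_hugepages - 1) / default_hugepages))
    echo "nr_hugepages=$nr_hugepages"     # prints nr_hugepages=1025

The odd page count is the point of odd_alloc: where even_2G_alloc worked with 1024 pages, this pass deliberately requests an odd number of 2 MiB pages and then verifies the kernel reports exactly that count back.
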
00:05:28.466 06:30:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:28.466 06:30:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:28.466 06:30:08 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:28.466 06:30:08 -- setup/common.sh@18 -- # local node=
00:05:28.467 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:28.467 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:28.467 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:28.467 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:28.467 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:28.467 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:28.467 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:28.467 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:28.467 06:30:08 -- setup/common.sh@31 -- # read -r var val _
00:05:28.467 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6689720 kB' 'MemAvailable: 9459836 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449588 kB' 'Inactive: 2645524 kB' 'Active(anon): 128688 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 48788 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159340 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78076 kB' 'KernelStack: 6712 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... identical IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue xtrace repeated for every key from MemTotal through HardwareCorrupted ...]
00:05:28.468 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:28.468 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:28.468 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:28.468 06:30:08 -- setup/hugepages.sh@97 -- # anon=0
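
The backslash-heavy comparisons that fill this trace ([[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and so on) are just bash xtrace quoting the right-hand literal of a string match: get_meminfo walks the meminfo snapshot one 'Key: value' pair at a time, continues past every non-matching key, and echoes the value of the first match. A compact sketch of that loop, assuming plain /proc/meminfo input (the real setup/common.sh also handles per-node meminfo files, which is what the node/meminfo test and the mem=("${mem[@]#Node +([0-9]) }") prefix-strip above are for); this is a reconstruction from the trace, not the verbatim SPDK source:

    get_meminfo() { # sketch: print the value of one /proc/meminfo key
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the continue lines filling this log
            echo "$val"                       # e.g. 0 for AnonHugePages
            return 0
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # matches the hugepages.sh@97 anon=0 above

Splitting on IFS=': ' is what makes val the bare number: 'AnonHugePages: 0 kB' yields var=AnonHugePages and val=0, with the kB unit discarded into _.
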
00:05:28.468 06:30:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... get_meminfo entry xtrace (local get/node/var/val, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip) identical to the AnonHugePages call above ...]
00:05:28.468 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6689996 kB' 'MemAvailable: 9460112 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449696 kB' 'Inactive: 2645524 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159352 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78088 kB' 'KernelStack: 6736 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace repeated for every key from MemTotal through HugePages_Rsvd ...]
00:05:28.470 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.470 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:28.470 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:28.470 06:30:08 -- setup/hugepages.sh@99 -- # surp=0
00:05:28.470 06:30:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo entry xtrace as above ...]
00:05:28.470 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6690548 kB' 'MemAvailable: 9460664 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449660 kB' 'Inactive: 2645524 kB' 'Active(anon): 128760 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159352 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78088 kB' 'KernelStack: 6720 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue xtrace repeated for every key from MemTotal through HugePages_Free ...]
00:05:28.471 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:28.471 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:28.471 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:28.471 06:30:08 -- setup/hugepages.sh@100 -- # resv=0
00:05:28.471 nr_hugepages=1025
00:05:28.471 06:30:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:28.471 resv_hugepages=0
00:05:28.471 06:30:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:28.471 surplus_hugepages=0
00:05:28.471 06:30:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:28.471 06:30:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:28.471 anon_hugepages=0
00:05:28.471 06:30:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:28.471 06:30:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:28.471 06:30:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo entry xtrace as above ...]
00:05:28.472 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6690980 kB' 'MemAvailable: 9461096 kB' 'Buffers: 2436 kB' 'Cached: 2974456 kB' 'SwapCached: 0 kB' 'Active: 449688 kB' 'Inactive: 2645524 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159352 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78088 kB' 'KernelStack: 6736 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 
00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.472 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.472 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.473 06:30:08 -- setup/common.sh@33 -- # echo 1025 00:05:28.473 06:30:08 -- setup/common.sh@33 -- # return 0 00:05:28.473 06:30:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:28.473 06:30:08 -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.473 06:30:08 -- setup/hugepages.sh@27 -- # local node 00:05:28.473 06:30:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.473 06:30:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
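The (( 1025 == nr_hugepages + surp + resv )) test at hugepages.sh@110 is the core assertion of odd_alloc: the kernel's HugePages_Total must equal the requested (odd) page count plus surplus plus reserved pages. A hedged sketch of that assertion and the node enumeration that follows it, reusing the helper sketched earlier; extglob and the literal values are taken from this run:

    shopt -s extglob                      # for the node+([0-9]) glob below
    nr_hugepages=1025; surp=0; resv=0     # values observed in this run
    total=$(get_meminfo_sketch HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || exit 1   # 1025 == 1025 + 0 + 0
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$total  # single-node VM here: nodes_sys[0]=1025
    done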
00:05:28.473 06:30:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.473 06:30:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.473 06:30:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.473 06:30:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.473 06:30:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.473 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.473 06:30:08 -- setup/common.sh@18 -- # local node=0 00:05:28.473 06:30:08 -- setup/common.sh@19 -- # local var val 00:05:28.473 06:30:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:28.473 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.473 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.473 06:30:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.473 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.473 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6690980 kB' 'MemUsed: 5551000 kB' 'SwapCached: 0 kB' 'Active: 449388 kB' 'Inactive: 2645524 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976892 kB' 'Mapped: 48668 kB' 'AnonPages: 119632 kB' 'Shmem: 10468 kB' 'KernelStack: 6720 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81264 kB' 'Slab: 159352 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.473 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.473 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 
06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 
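The HugePages_Surp pass being traced here differs from the earlier global one in its input: with node=0, common.sh@23-29 switches the source file to the per-node meminfo and strips the "Node 0 " prefix before the same key loop runs. A sketch assembled from the statements visible in the trace (extglob is required by the Node +([0-9]) strip):

    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."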
06:30:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # continue 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:28.474 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:28.474 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.474 06:30:08 -- setup/common.sh@33 -- # echo 0 00:05:28.474 06:30:08 -- setup/common.sh@33 -- # return 0 00:05:28.474 06:30:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.474 06:30:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.474 06:30:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.474 06:30:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.474 node0=1025 expecting 1025 00:05:28.474 06:30:08 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:28.474 06:30:08 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:28.474 00:05:28.474 real 0m0.527s 00:05:28.474 user 0m0.265s 00:05:28.474 sys 0m0.294s 00:05:28.474 06:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.475 06:30:08 -- common/autotest_common.sh@10 -- # set +x 00:05:28.475 ************************************ 00:05:28.475 END TEST odd_alloc 00:05:28.475 ************************************ 00:05:28.475 06:30:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:28.475 06:30:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.475 06:30:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.475 06:30:08 -- common/autotest_common.sh@10 -- # set +x 00:05:28.475 ************************************ 00:05:28.475 START TEST custom_alloc 00:05:28.475 ************************************ 00:05:28.475 06:30:08 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:28.475 06:30:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:28.475 06:30:08 -- setup/hugepages.sh@169 -- # local node 00:05:28.475 06:30:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:28.475 06:30:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:28.475 06:30:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:28.475 06:30:08 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:28.475 06:30:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:28.475 06:30:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:28.475 06:30:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.475 06:30:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.475 06:30:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.475 06:30:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:28.475 06:30:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.475 06:30:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.475 06:30:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.475 06:30:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:28.475 06:30:08 -- setup/hugepages.sh@83 -- # : 0 00:05:28.475 06:30:08 -- setup/hugepages.sh@84 -- # : 0 00:05:28.475 06:30:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:28.475 06:30:08 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:28.475 06:30:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:28.475 06:30:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:28.475 06:30:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.475 06:30:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.475 06:30:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:28.475 06:30:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.475 06:30:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.475 06:30:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.475 06:30:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:28.475 06:30:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:28.475 06:30:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:28.475 06:30:08 -- setup/hugepages.sh@78 -- # return 0 00:05:28.475 06:30:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:28.475 06:30:08 -- setup/hugepages.sh@187 -- # setup output 00:05:28.475 06:30:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.475 06:30:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.047 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.047 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.047 06:30:08 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:29.047 06:30:08 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:29.047 06:30:08 -- setup/hugepages.sh@89 -- # local node 00:05:29.047 06:30:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:29.047 06:30:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:29.047 06:30:08 -- setup/hugepages.sh@92 -- # local surp 
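custom_alloc asks get_test_nr_hugepages for 1048576 kB; with the 2048 kB Hugepagesize reported in the meminfo dumps, that comes out to the 512 pages seen in nr_hugepages and in HUGENODE. The division itself happens outside this excerpt, so the sketch below is an assumption consistent with the traced values:

    size=1048576              # kB, the argument traced at hugepages.sh@49
    default_hugepages=2048    # kB, Hugepagesize from the dumps above
    nr_hugepages=$(( size / default_hugepages ))   # -> 512
    nodes_hp[0]=$nr_hugepages
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"          # matches the traced HUGENODE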
00:05:29.047 06:30:08 -- setup/hugepages.sh@93 -- # local resv 00:05:29.047 06:30:08 -- setup/hugepages.sh@94 -- # local anon 00:05:29.047 06:30:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:29.047 06:30:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:29.047 06:30:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:29.047 06:30:08 -- setup/common.sh@18 -- # local node= 00:05:29.047 06:30:08 -- setup/common.sh@19 -- # local var val 00:05:29.047 06:30:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.047 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.047 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.047 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.047 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.047 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7745696 kB' 'MemAvailable: 10515816 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449712 kB' 'Inactive: 2645528 kB' 'Active(anon): 128812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159336 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78072 kB' 'KernelStack: 6728 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.047 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 
00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.048 06:30:08 -- setup/common.sh@33 -- # echo 0 00:05:29.048 06:30:08 -- setup/common.sh@33 -- # return 0 00:05:29.048 06:30:08 -- setup/hugepages.sh@97 -- # anon=0 00:05:29.048 06:30:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:29.048 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.048 06:30:08 -- setup/common.sh@18 -- # local node= 00:05:29.048 06:30:08 -- setup/common.sh@19 -- # local var val 00:05:29.048 06:30:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.048 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
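The guard traced at hugepages.sh@96 compares the transparent_hugepage setting ("always [madvise] never" on this host) against *\[\n\e\v\e\r\]*; since THP is not fully disabled, AnonHugePages is read and comes back 0 at common.sh@33 above. A hedged sketch of that guard; the sysfs path is an assumption based on where that string normally comes from:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)          # 0 in this run
    fi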
00:05:29.048 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.048 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.048 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.048 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7745836 kB' 'MemAvailable: 10515956 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449752 kB' 'Inactive: 2645528 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119972 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159336 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78072 kB' 'KernelStack: 6720 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- 
setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.048 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 
00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.049 06:30:08 -- setup/common.sh@33 -- # echo 0 00:05:29.049 06:30:08 -- setup/common.sh@33 -- # return 0 00:05:29.049 06:30:08 -- setup/hugepages.sh@99 -- # surp=0 00:05:29.049 06:30:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.049 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.049 06:30:08 -- setup/common.sh@18 -- # local node= 00:05:29.049 06:30:08 -- setup/common.sh@19 -- # local var val 00:05:29.049 06:30:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.049 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.049 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.049 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.049 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.050 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7745584 kB' 'MemAvailable: 10515704 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449440 kB' 'Inactive: 2645528 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119684 kB' 'Mapped: 
48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159328 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78064 kB' 'KernelStack: 6736 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.050 06:30:08 -- setup/common.sh@32 -- # continue 00:05:29.050 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 06:30:08 -- 
00:05:29.051 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:29.051 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:29.051 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:29.051 06:30:08 -- setup/hugepages.sh@100 -- # resv=0
00:05:29.051 nr_hugepages=512
00:05:29.051 06:30:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:29.051 resv_hugepages=0
00:05:29.051 06:30:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:29.051 surplus_hugepages=0
00:05:29.051 06:30:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:29.051 anon_hugepages=0
00:05:29.051 06:30:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:29.051 06:30:08 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:29.051 06:30:08 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:29.051 06:30:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:29.051 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:29.051 06:30:08 -- setup/common.sh@18 -- # local node=
00:05:29.051 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:29.051 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.051 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.051 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.051 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.051 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.051 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.051 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:29.051 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7746244 kB' 'MemAvailable: 10516364 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449684 kB' 'Inactive: 2645528 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119928 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159328 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78064 kB' 'KernelStack: 6736 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 scan the snapshot key by key toward HugePages_Total]
00:05:29.052 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:29.052 06:30:08 -- setup/common.sh@33 -- # echo 512
00:05:29.052 06:30:08 -- setup/common.sh@33 -- # return 0
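get_meminfo HugePages_Total just returned 512, and the surrounding assertions (hugepages.sh@107 and @109 above, @110 below) all reduce to one accounting identity over the values read so far. A sketch with this run's numbers, names mirroring the trace (illustrative, not the script's code):

    # Hugepage accounting identity asserted by custom_alloc, this run's values:
    nr_hugepages=512   # pages requested for the test
    surp=0             # HugePages_Surp  from get_meminfo
    resv=0             # HugePages_Rsvd  from get_meminfo
    total=512          # HugePages_Total from get_meminfo
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2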
00:05:29.052 06:30:08 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:29.052 06:30:08 -- setup/hugepages.sh@112 -- # get_nodes
00:05:29.052 06:30:08 -- setup/hugepages.sh@27 -- # local node
00:05:29.052 06:30:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:29.052 06:30:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:29.052 06:30:08 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:29.052 06:30:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:29.052 06:30:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:29.052 06:30:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
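get_nodes, traced above, enumerates /sys/devices/system/node/node* with an extglob pattern and slices the node index out of the directory name with a ${node##*node} expansion. A hedged stand-alone sketch of that enumeration (the 512-page expectation value is this run's; the snippet is illustrative):

    shopt -s extglob                       # enables the +([0-9]) glob below
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512      # key "0" from ".../node0", and so on
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1           # this VM has a single node, so no_nodes=1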
00:05:29.052 06:30:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:29.052 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.052 06:30:08 -- setup/common.sh@18 -- # local node=0
00:05:29.052 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:29.052 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.052 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.052 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:29.052 06:30:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:29.052 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.052 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.052 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:29.052 06:30:08 -- setup/common.sh@31 -- # read -r var val _
00:05:29.053 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7746244 kB' 'MemUsed: 4495736 kB' 'SwapCached: 0 kB' 'Active: 449664 kB' 'Inactive: 2645528 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976896 kB' 'Mapped: 48668 kB' 'AnonPages: 119872 kB' 'Shmem: 10468 kB' 'KernelStack: 6720 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81264 kB' 'Slab: 159328 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 scan the node0 snapshot key by key toward HugePages_Surp]
00:05:29.053 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.053 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:29.053 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:29.053 06:30:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:29.053 06:30:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:29.053 06:30:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:29.053 06:30:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:29.053 node0=512 expecting 512
00:05:29.053 06:30:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:29.053 06:30:08 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:29.053
00:05:29.053 real 0m0.548s
00:05:29.053 user 0m0.257s
00:05:29.053 sys 0m0.299s
00:05:29.053 06:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:29.053 06:30:08 -- common/autotest_common.sh@10 -- # set +x
00:05:29.053 ************************************
00:05:29.053 END TEST custom_alloc
00:05:29.053 ************************************
00:05:29.312 06:30:08 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:29.312 06:30:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:29.312 06:30:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:29.312 06:30:08 -- common/autotest_common.sh@10 -- # set +x
00:05:29.312 ************************************
00:05:29.312 START TEST no_shrink_alloc
00:05:29.312 ************************************
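The get_test_nr_hugepages trace that follows converts the requested size into a page count: 2097152 divided by the 2048 kB default hugepage size yields the nr_hugepages=1024 seen below. A sketch of that arithmetic, assuming size and default_hugepages share units (the real helper lives in setup/hugepages.sh):

    size=2097152            # requested pool, assumed same units as default_hugepages
    default_hugepages=2048  # Hugepagesize reported in /proc/meminfo, kB
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1024, matching the trace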
00:05:29.312 06:30:08 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:05:29.312 06:30:08 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:29.312 06:30:08 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:29.312 06:30:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:29.312 06:30:08 -- setup/hugepages.sh@51 -- # shift
00:05:29.312 06:30:08 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:29.312 06:30:08 -- setup/hugepages.sh@52 -- # local node_ids
00:05:29.312 06:30:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:29.312 06:30:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:29.312 06:30:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:29.312 06:30:08 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:29.312 06:30:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:29.312 06:30:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:29.312 06:30:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:29.312 06:30:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:29.312 06:30:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:29.312 06:30:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:29.312 06:30:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:29.312 06:30:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:29.312 06:30:08 -- setup/hugepages.sh@73 -- # return 0
00:05:29.312 06:30:08 -- setup/hugepages.sh@198 -- # setup output
00:05:29.312 06:30:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.312 06:30:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:29.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:29.573 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:29.573 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:29.573 06:30:09 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:29.573 06:30:09 -- setup/hugepages.sh@89 -- # local node
00:05:29.573 06:30:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:29.573 06:30:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:29.573 06:30:09 -- setup/hugepages.sh@92 -- # local surp
00:05:29.573 06:30:09 -- setup/hugepages.sh@93 -- # local resv
00:05:29.573 06:30:09 -- setup/hugepages.sh@94 -- # local anon
00:05:29.573 06:30:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:29.573 06:30:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:29.573 06:30:09 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:29.573 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:29.573 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:29.573 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.573 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.573 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.573 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.573 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.573 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.573 06:30:09 -- setup/common.sh@31 -- # IFS=': '
00:05:29.573 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6705128 kB' 'MemAvailable: 9475248 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449908 kB' 'Inactive: 2645528 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120216 kB' 'Mapped: 48848 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159252 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 77988 kB' 'KernelStack: 6744 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 scan the snapshot key by key toward AnonHugePages]
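verify_nr_hugepages, traced above, only treats AnonHugePages as meaningful when transparent hugepages are not globally disabled; that is what the [[ always [madvise] never != *\[never\]* ]] test encodes. A stand-alone sketch of that gate (the sysfs path is the standard kernel location; the snippet is illustrative, not the script's code):

    # anon hugepages only count when THP is not set to "never"
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # value in kB
    fi
    echo "anon_hugepages=$anon"   # -> 0 in this run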
00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:29.574 06:30:09 -- setup/common.sh@33 -- # echo 0
00:05:29.574 06:30:09 -- setup/common.sh@33 -- # return 0
00:05:29.574 06:30:09 -- setup/hugepages.sh@97 -- # anon=0
00:05:29.574 06:30:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:29.574 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.574 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:29.574 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:29.574 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.574 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.574 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.574 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.574 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.574 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': '
00:05:29.574 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6705276 kB' 'MemAvailable: 9475396 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449712 kB' 'Inactive: 2645528 kB' 'Active(anon): 128812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159252 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 77988 kB' 'KernelStack: 6704 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024'
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 
-- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.574 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.574 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- 
setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 
00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.575 06:30:09 -- setup/common.sh@33 -- # echo 0 00:05:29.575 06:30:09 -- setup/common.sh@33 -- # return 0 00:05:29.575 06:30:09 -- setup/hugepages.sh@99 -- # surp=0 00:05:29.575 06:30:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.575 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.575 06:30:09 -- setup/common.sh@18 -- # local node= 00:05:29.575 06:30:09 -- setup/common.sh@19 -- # local var val 00:05:29.575 06:30:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:29.575 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.575 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.575 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.575 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.575 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6705276 kB' 'MemAvailable: 9475396 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449504 kB' 'Inactive: 2645528 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159252 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 77988 kB' 'KernelStack: 6756 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val 
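Every block of [[ key == pattern ]] / continue pairs in this stretch is the same helper, get_meminfo from test/setup/common.sh, doing a linear scan of a meminfo file. The @17-@33 lines above give away its shape: default to /proc/meminfo, switch to the per-node copy under /sys when a node id is passed, strip the "Node N " prefix that per-node files carry, then walk the "key: value" pairs and print the value of the requested key. A minimal reconstruction from the xtrace, assuming the obvious loop around the @31/@32/@33 steps (the script itself may differ in detail):

    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # with a node id, prefer the per-node file under /sys (common.sh@23-@24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it (common.sh@29)
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"    # bare number; any "kB" suffix lands in the discarded $_
            return 0
        done
    }

The caller captures the printed value with command substitution, so the surp=0 seen at hugepages.sh@99 above corresponds to something like surp=$(get_meminfo HugePages_Surp).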
00:05:29.575 06:30:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:29.575 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:29.575 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:29.575 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:29.575 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.575 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.575 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.575 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.575 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.575 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.575 06:30:09 -- setup/common.sh@31 -- # IFS=': '
00:05:29.575 06:30:09 -- setup/common.sh@31 -- # read -r var val _
00:05:29.575 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6705276 kB' 'MemAvailable: 9475396 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449504 kB' 'Inactive: 2645528 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159252 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 77988 kB' 'KernelStack: 6756 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:05:29.575 06:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:29.575 06:30:09 -- setup/common.sh@32 -- # continue
...
00:05:29.576 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:29.576 06:30:09 -- setup/common.sh@33 -- # echo 0
00:05:29.576 06:30:09 -- setup/common.sh@33 -- # return 0
00:05:29.576 06:30:09 -- setup/hugepages.sh@100 -- # resv=0
00:05:29.576 nr_hugepages=1024 06:30:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:29.576 resv_hugepages=0 06:30:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:29.576 surplus_hugepages=0 06:30:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:29.576 anon_hugepages=0 06:30:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:29.576 06:30:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:29.576 06:30:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
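The echoes and arithmetic checks at hugepages.sh@102-@110 assert the kernel's hugepage accounting identity: HugePages_Total must equal the configured nr_hugepages plus surplus and reserved pages. With this run's values that is 1024 == 1024 + 0 + 0, so both (( ... )) tests succeed. In the trace's own variable names, a sketch of the check (not the verbatim script):

    # hugepage accounting identity exercised at hugepages.sh@107/@110 (sketch)
    nr_hugepages=1024                        # requested by the test setup
    surp=$(get_meminfo HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo HugePages_Total)     # 1024 in this run

    (( total == nr_hugepages + surp + resv ))    # 1024 == 1024 + 0 + 0, passes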
00:05:29.576 06:30:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:29.576 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:29.576 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:29.577 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:29.577 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.577 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.577 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.577 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.577 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.577 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.577 06:30:09 -- setup/common.sh@31 -- # IFS=': '
00:05:29.577 06:30:09 -- setup/common.sh@31 -- # read -r var val _
00:05:29.577 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6704772 kB' 'MemAvailable: 9474892 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 449460 kB' 'Inactive: 2645528 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119956 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159248 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 77984 kB' 'KernelStack: 6740 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:05:29.577 06:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:29.577 06:30:09 -- setup/common.sh@32 -- # continue
...
00:05:29.578 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:29.578 06:30:09 -- setup/common.sh@33 -- # echo 1024
00:05:29.578 06:30:09 -- setup/common.sh@33 -- # return 0
00:05:29.578 06:30:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:29.578 06:30:09 -- setup/hugepages.sh@112 -- # get_nodes
00:05:29.578 06:30:09 -- setup/hugepages.sh@27 -- # local node
00:05:29.578 06:30:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:29.578 06:30:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:29.578 06:30:09 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:29.578 06:30:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
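get_nodes (hugepages.sh@27-@33 above) discovers the NUMA layout by globbing /sys/devices/system/node/node*, records each node's hugepage count in a nodes_sys array keyed by node id, and fails when no node is found; this VM has a single node 0 holding all 1024 pages, hence no_nodes=1. A sketch consistent with the trace. The trace only shows a literal 1024 stored at @30, so fetching it via the per-node HugePages_Total below is an assumption:

    shopt -s extglob                # the +([0-9]) glob needs extended globbing
    declare -a nodes_sys=()         # assumed declaration; not visible in the trace

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # "${node##*node}" reduces /sys/devices/system/node/node0 to "0"
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))          # refuse to proceed without at least one node
    }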
00:05:29.578 06:30:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:29.578 06:30:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:29.578 06:30:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:29.578 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.578 06:30:09 -- setup/common.sh@18 -- # local node=0
00:05:29.578 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:29.578 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:29.578 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.578 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:29.578 06:30:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:29.578 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.578 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.578 06:30:09 -- setup/common.sh@31 -- # IFS=': '
00:05:29.578 06:30:09 -- setup/common.sh@31 -- # read -r var val _
00:05:29.578 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6704772 kB' 'MemUsed: 5537208 kB' 'SwapCached: 0 kB' 'Active: 449428 kB' 'Inactive: 2645528 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976896 kB' 'Mapped: 48728 kB' 'AnonPages: 119912 kB' 'Shmem: 10468 kB' 'KernelStack: 6736 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81264 kB' 'Slab: 159284 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:29.578 06:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.578 06:30:09 -- setup/common.sh@32 -- # continue
...
00:05:29.837 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.837 06:30:09 -- setup/common.sh@33 -- # echo 0
00:05:29.837 06:30:09 -- setup/common.sh@33 -- # return 0
00:05:29.837 06:30:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
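The loop at hugepages.sh@115-@117 builds the observed per-node count: it starts from the pages requested for the node, adds the reserved pages, then adds that node's surplus read from /sys/devices/system/node/node0/meminfo (both contributions are 0 here). The @126-@130 lines that follow compare the result against what the kernel reports and print the node0=1024 expecting 1024 summary. A sketch of that bookkeeping, reusing get_meminfo and nodes_sys from above; the seeding of nodes_test happens before this excerpt, so the first line is an assumption:

    declare -a nodes_test=([0]=1024)     # assumed seed: pages requested on node 0
    declare -a sorted_t=() sorted_s=()   # buckets of distinct observed/expected counts
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run (hugepages.sh@100 above)

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117
    done

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1                                      # @127
        sorted_s[nodes_sys[node]]=1                                       # @127
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}" # @128
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]                 # @130
    done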
00:05:29.837 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.837 06:30:09 -- setup/common.sh@33 -- # echo 0 00:05:29.837 06:30:09 -- setup/common.sh@33 -- # return 0 00:05:29.837 06:30:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.837 06:30:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.837 06:30:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.837 06:30:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.837 node0=1024 expecting 1024 00:05:29.837 06:30:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:29.837 06:30:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:29.837 06:30:09 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:29.837 06:30:09 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:29.837 06:30:09 -- setup/hugepages.sh@202 -- # setup output 00:05:29.837 06:30:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.837 06:30:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.099 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.099 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.099 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:30.099 06:30:09 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:30.099 06:30:09 -- setup/hugepages.sh@89 -- # local node 00:05:30.099 06:30:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:30.099 06:30:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:30.099 06:30:09 -- setup/hugepages.sh@92 -- # local surp 00:05:30.099 06:30:09 -- setup/hugepages.sh@93 -- # local resv 00:05:30.099 06:30:09 -- setup/hugepages.sh@94 -- # local anon 00:05:30.099 06:30:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:30.099 06:30:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:30.099 06:30:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:30.099 06:30:09 -- setup/common.sh@18 -- # local node= 00:05:30.099 06:30:09 -- setup/common.sh@19 -- # local var val 00:05:30.099 06:30:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:30.099 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.099 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.099 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.099 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.099 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.099 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709084 kB' 'MemAvailable: 9479204 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 445936 kB' 'Inactive: 2645528 kB' 'Active(anon): 125036 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 116144 kB' 'Mapped: 48204 kB' 'Shmem: 10468 kB' 'KReclaimable: 81264 kB' 'Slab: 159284 kB' 'SReclaimable: 81264 kB' 'SUnreclaim: 78020 kB' 'KernelStack: 6760 kB' 'PageTables: 4224 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 
-- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.100 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.100 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 06:30:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.101 06:30:09 -- setup/common.sh@32 -- # continue 00:05:30.101 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 06:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 06:30:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.101 06:30:09 -- setup/common.sh@33 -- # echo 0 00:05:30.101 06:30:09 -- setup/common.sh@33 -- # return 0 00:05:30.101 06:30:09 -- setup/hugepages.sh@97 -- # anon=0 00:05:30.101 06:30:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:30.101 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.101 06:30:09 -- setup/common.sh@18 -- # local node= 00:05:30.101 06:30:09 -- setup/common.sh@19 -- # local var val 00:05:30.101 06:30:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:30.101 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.101 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.101 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.101 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.101 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.101 06:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709024 kB' 'MemAvailable: 9479128 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 445572 kB' 'Inactive: 2645528 kB' 'Active(anon): 124672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 115768 kB' 'Mapped: 48108 kB' 'Shmem: 10468 kB' 'KReclaimable: 81232 kB' 'Slab: 159164 kB' 'SReclaimable: 81232 kB' 'SUnreclaim: 77932 kB' 'KernelStack: 6692 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:05:30.101 06:30:09 -- setup/common.sh@31 -- # read -r 
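A note on what the condensed trace is doing: get_meminfo in setup/common.sh snapshots the memory counters, strips any "Node N " prefix, then splits each "Key: value kB" line on ':' and spaces until the requested key matches, echoing its value. A minimal sketch of the same pattern (the helper name and the sed-based prefix strip are illustrative, not the repo's exact code):

    # Sketch of the get_meminfo pattern visible in the trace above.
    # Assumes Linux /proc/meminfo and sysfs per-node meminfo layouts.
    get_meminfo_value() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, read that node's counters instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node lines are prefixed "Node N "; drop it, then split
        # "Key:   value kB" on ':' and spaces, like the traced read loop.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_value AnonHugePages -> 0 on this runner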
00:05:30.101 06:30:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:30.101 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.101 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:30.101 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:30.101 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:30.101 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.101 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.101 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.101 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.101 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.101 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709024 kB' 'MemAvailable: 9479128 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 445572 kB' 'Inactive: 2645528 kB' 'Active(anon): 124672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 115768 kB' 'Mapped: 48108 kB' 'Shmem: 10468 kB' 'KReclaimable: 81232 kB' 'Slab: 159164 kB' 'SReclaimable: 81232 kB' 'SUnreclaim: 77932 kB' 'KernelStack: 6692 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: the read loop scans MemTotal through HugePages_Rsvd against HugePages_Surp; no match, each key continues]
00:05:30.102 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.102 06:30:09 -- setup/common.sh@33 -- # echo 0
00:05:30.102 06:30:09 -- setup/common.sh@33 -- # return 0
00:05:30.102 06:30:09 -- setup/hugepages.sh@99 -- # surp=0
00:05:30.102 06:30:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:30.102 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:30.102 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:30.102 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:30.102 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:30.102 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.102 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.102 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.102 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.102 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.102 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709024 kB' 'MemAvailable: 9479128 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 445124 kB' 'Inactive: 2645528 kB' 'Active(anon): 124224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 115580 kB' 'Mapped: 47988 kB' 'Shmem: 10468 kB' 'KReclaimable: 81232 kB' 'Slab: 159156 kB' 'SReclaimable: 81232 kB' 'SUnreclaim: 77924 kB' 'KernelStack: 6628 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
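The printf lines above are the captured snapshots the loop parses; their HugePages_* fields are the counters under test. The same counters are also exported per page size under sysfs, which is a handy cross-check outside the harness (standard kernel paths; 2048 kB matches the Hugepagesize reported in the snapshots):

    # nr/free/resv/surplus map to HugePages_Total/Free/Rsvd/Surp in /proc/meminfo
    d=/sys/kernel/mm/hugepages/hugepages-2048kB
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
        printf '%s: %s\n' "$f" "$(cat "$d/$f")"
    done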
[trace condensed: the read loop scans MemTotal through HugePages_Free against HugePages_Rsvd; no match, each key continues]
00:05:30.103 06:30:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:30.103 06:30:09 -- setup/common.sh@33 -- # echo 0
00:05:30.103 06:30:09 -- setup/common.sh@33 -- # return 0
00:05:30.103 06:30:09 -- setup/hugepages.sh@100 -- # resv=0
00:05:30.103 nr_hugepages=1024
00:05:30.103 06:30:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:30.103 resv_hugepages=0
00:05:30.103 06:30:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:30.103 surplus_hugepages=0
00:05:30.103 anon_hugepages=0
00:05:30.103 06:30:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:30.103 06:30:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:30.103 06:30:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.103 06:30:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
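The two arithmetic checks above are the heart of verify_nr_hugepages: with anon, surp and resv all 0, the pool is consistent only if HugePages_Total equals the configured nr_hugepages plus surplus plus reserved pages. As a sketch, reusing the illustrative get_meminfo_value helper from earlier:

    nr_hugepages=1024   # the count this test run configured earlier
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage pool consistent" \
        || echo "unexpected: total=$total surp=$surp resv=$resv"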
00:05:30.103 06:30:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:30.103 06:30:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:30.103 06:30:09 -- setup/common.sh@18 -- # local node=
00:05:30.103 06:30:09 -- setup/common.sh@19 -- # local var val
00:05:30.103 06:30:09 -- setup/common.sh@20 -- # local mem_f mem
00:05:30.103 06:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.103 06:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.103 06:30:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.103 06:30:09 -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.103 06:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.103 06:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709024 kB' 'MemAvailable: 9479128 kB' 'Buffers: 2436 kB' 'Cached: 2974460 kB' 'SwapCached: 0 kB' 'Active: 445224 kB' 'Inactive: 2645528 kB' 'Active(anon): 124324 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 115744 kB' 'Mapped: 47988 kB' 'Shmem: 10468 kB' 'KReclaimable: 81232 kB' 'Slab: 159152 kB' 'SReclaimable: 81232 kB' 'SUnreclaim: 77920 kB' 'KernelStack: 6676 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: the read loop scans MemTotal through Unaccepted against HugePages_Total; no match until HugePages_Total]
00:05:30.105 06:30:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:30.105 06:30:10 -- setup/common.sh@33 -- # echo 1024
00:05:30.105 06:30:10 -- setup/common.sh@33 -- # return 0
00:05:30.105 06:30:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.105 06:30:10 -- setup/hugepages.sh@112 -- # get_nodes
00:05:30.105 06:30:10 -- setup/hugepages.sh@27 -- # local node
00:05:30.105 06:30:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.105 06:30:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:30.105 06:30:10 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:30.105 06:30:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:30.105 06:30:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:30.105 06:30:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:30.105 06:30:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:30.105 06:30:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.105 06:30:10 -- setup/common.sh@18 -- # local node=0
00:05:30.105 06:30:10 -- setup/common.sh@19 -- # local var val
00:05:30.105 06:30:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:30.105 06:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.105 06:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:30.105 06:30:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:30.364 06:30:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.364 06:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.364 06:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709024 kB' 'MemUsed: 5532956 kB' 'SwapCached: 0 kB' 'Active: 445168 kB' 'Inactive: 2645528 kB' 'Active(anon): 124268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976896 kB' 'Mapped: 47988 kB' 'AnonPages: 115660 kB' 'Shmem: 10468 kB'
06:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6709024 kB' 'MemUsed: 5532956 kB' 'SwapCached: 0 kB' 'Active: 445168 kB' 'Inactive: 2645528 kB' 'Active(anon): 124268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2976896 kB' 'Mapped: 47988 kB' 'AnonPages: 115660 kB' 'Shmem: 10468 kB' 'KernelStack: 6660 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81232 kB' 'Slab: 159128 kB' 'SReclaimable: 81232 kB' 'SUnreclaim: 77896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:30.364
06:30:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.364 06:30:10 -- setup/common.sh@32 -- # continue 00:05:30.364
[the @31/@32 read / compare / continue trace repeats for every field of the node0 record above, in order, until the requested key matches]
06:30:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 06:30:10 -- setup/common.sh@33 -- # echo 0 00:05:30.365 06:30:10 -- setup/common.sh@33 -- # return 0 00:05:30.365 06:30:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.365 06:30:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.365 06:30:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.365 06:30:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.365 06:30:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:30.365 node0=1024 expecting 1024 00:05:30.365 06:30:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:30.365 00:05:30.365 real 0m1.070s 00:05:30.365 user 0m0.535s 00:05:30.365 sys 0m0.579s 00:05:30.365 06:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.365 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 ************************************ 00:05:30.365 END TEST no_shrink_alloc 00:05:30.365 ************************************ 00:05:30.365 06:30:10 -- setup/hugepages.sh@217 -- # clear_hp 00:05:30.365 06:30:10 -- setup/hugepages.sh@37 -- # local node hp 00:05:30.365 06:30:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:30.365 06:30:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:30.365 06:30:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:30.365 06:30:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:30.365 06:30:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:30.365 06:30:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:30.365 06:30:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:30.365 00:05:30.365 real 0m4.632s 00:05:30.365 user 0m2.237s 00:05:30.365 sys 0m2.428s 00:05:30.365 ************************************ 00:05:30.365 END TEST hugepages 00:05:30.365 ************************************ 00:05:30.365 06:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.365 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365
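Everything the hugepages checks above do with those @31/@32 loops is one small field lookup. A minimal sketch of that helper, assuming the same IFS=': ' parsing the trace shows (the sed prefix-strip mirrors the "Node N " removal at setup/common.sh@29; not the repo's exact code):

  # Look up one meminfo field, system-wide or for a single NUMA node.
  get_meminfo() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      # per-node counters live in sysfs; each line there is prefixed "Node <n> "
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  get_meminfo HugePages_Total     # -> 1024 on this runner
  get_meminfo HugePages_Surp 0    # -> 0 for node0

The per-node read is what lets the test assert node0=1024 against the system-wide total seen earlier.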
06:30:10 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 06:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 06:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 ************************************ 00:05:30.365 START TEST driver 00:05:30.365 ************************************ 00:05:30.365 06:30:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:30.365 * Looking for test storage... 00:05:30.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 06:30:10 -- setup/driver.sh@68 -- # setup reset 06:30:10 -- setup/common.sh@9 -- # [[ reset == output ]] 06:30:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.934 06:30:10 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 06:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 06:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:30.934 ************************************ 00:05:30.934 START TEST guess_driver 00:05:30.934 ************************************ 00:05:30.934 06:30:10 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:30.934 06:30:10 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:30.934 06:30:10 -- setup/driver.sh@47 -- # local fail=0 00:05:30.934 06:30:10 -- setup/driver.sh@49 -- # pick_driver 00:05:30.934 06:30:10 -- setup/driver.sh@36 -- # vfio 00:05:30.934 06:30:10 -- setup/driver.sh@21 -- # local iommu_groups 00:05:30.934 06:30:10 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:30.934 06:30:10 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:30.934 06:30:10 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:30.934 06:30:10 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:30.934 06:30:10 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:30.934 06:30:10 -- setup/driver.sh@32 -- # return 1 00:05:30.934 06:30:10 -- setup/driver.sh@38 -- # uio 00:05:30.934 06:30:10 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:30.934 06:30:10 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:30.934 06:30:10 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:30.934 06:30:10 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:30.934 06:30:10 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:30.934 06:30:10 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:30.934 06:30:10 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:30.934 06:30:10 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:30.934 06:30:10 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:30.934 Looking for driver=uio_pci_generic 06:30:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.934 06:30:10 -- setup/driver.sh@45 -- # setup output config 00:05:30.934 06:30:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.934 06:30:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.499 06:30:11 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:31.499 06:30:11 -- setup/driver.sh@58 -- # continue 00:05:31.499 06:30:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.757 06:30:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:31.757 06:30:11 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:31.757 06:30:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.757 06:30:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:31.757 06:30:11 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:31.757 06:30:11 --
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.757 06:30:11 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:31.757 06:30:11 -- setup/driver.sh@65 -- # setup reset 00:05:31.757 06:30:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.757 06:30:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.323 00:05:32.323 real 0m1.393s 00:05:32.323 user 0m0.517s 00:05:32.323 sys 0m0.858s 00:05:32.323 06:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.323 ************************************ 00:05:32.323 END TEST guess_driver 00:05:32.323 ************************************ 00:05:32.323 06:30:12 -- common/autotest_common.sh@10 -- # set +x 00:05:32.323 ************************************ 00:05:32.323 END TEST driver 00:05:32.323 ************************************ 00:05:32.323 00:05:32.323 real 0m2.067s 00:05:32.323 user 0m0.757s 00:05:32.323 sys 0m1.339s 00:05:32.323 06:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.323 06:30:12 -- common/autotest_common.sh@10 -- # set +x 00:05:32.581 06:30:12 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:32.581 06:30:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.581 06:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.581 06:30:12 -- common/autotest_common.sh@10 -- # set +x 00:05:32.581 ************************************ 00:05:32.581 START TEST devices 00:05:32.581 ************************************ 00:05:32.581 06:30:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:32.581 * Looking for test storage... 00:05:32.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:32.581 06:30:12 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:32.581 06:30:12 -- setup/devices.sh@192 -- # setup reset 00:05:32.581 06:30:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.581 06:30:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.148 06:30:13 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:33.148 06:30:13 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:33.148 06:30:13 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:33.148 06:30:13 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:33.148 06:30:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:33.148 06:30:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:33.148 06:30:13 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:33.148 06:30:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:33.148 06:30:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:33.148 06:30:13 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:33.148 06:30:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:33.148 06:30:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:33.148 06:30:13 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:33.148 06:30:13 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:33.148 06:30:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:33.148 06:30:13 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:33.148 06:30:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:33.148 06:30:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:33.148 06:30:13 -- setup/devices.sh@196 -- # blocks=() 00:05:33.148 06:30:13 -- setup/devices.sh@196 -- # declare -a blocks 00:05:33.148 06:30:13 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:33.148 06:30:13 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:33.148 06:30:13 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:33.148 06:30:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:33.148 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:33.148 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:33.406 06:30:13 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:33.406 06:30:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:33.406 06:30:13 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:33.406 06:30:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:33.406 No valid GPT data, bailing 00:05:33.406 06:30:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:33.406 06:30:13 -- scripts/common.sh@393 -- # pt= 00:05:33.406 06:30:13 -- scripts/common.sh@394 -- # return 1 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:33.406 06:30:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:33.406 06:30:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:33.406 06:30:13 -- setup/common.sh@80 -- # echo 5368709120 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:33.406 06:30:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.406 06:30:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:33.406 06:30:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:33.406 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:33.406 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:33.406 06:30:13 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:33.406 06:30:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:33.406 06:30:13 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:33.406 06:30:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:33.406 No valid GPT data, bailing 00:05:33.406 06:30:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:33.406 06:30:13 -- scripts/common.sh@393 -- # pt= 00:05:33.406 06:30:13 -- scripts/common.sh@394 -- # return 1 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:33.406 06:30:13 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:33.406 06:30:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:33.406 06:30:13 -- setup/common.sh@80 -- # echo 4294967296 00:05:33.406 06:30:13 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:33.406 06:30:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.406 06:30:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:33.406 06:30:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:33.406 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:33.406 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:33.406 06:30:13 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:33.406 06:30:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:33.406 06:30:13 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:33.406 06:30:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:33.406 No valid GPT data, bailing 00:05:33.406 06:30:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:33.406 06:30:13 -- scripts/common.sh@393 -- # pt= 00:05:33.406 06:30:13 -- scripts/common.sh@394 -- # return 1 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:33.406 06:30:13 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:33.406 06:30:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:33.406 06:30:13 -- setup/common.sh@80 -- # echo 4294967296 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:33.406 06:30:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.406 06:30:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:33.406 06:30:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:33.406 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:33.406 06:30:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:33.406 06:30:13 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:33.406 06:30:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:33.406 06:30:13 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:33.406 06:30:13 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:33.406 06:30:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:33.664 No valid GPT data, bailing 00:05:33.664 06:30:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:33.664 06:30:13 -- scripts/common.sh@393 -- # pt= 00:05:33.664 06:30:13 -- scripts/common.sh@394 -- # return 1 00:05:33.664 06:30:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:33.664 06:30:13 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:33.664 06:30:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:33.664 06:30:13 -- setup/common.sh@80 -- # echo 4294967296 00:05:33.664 06:30:13 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:33.664 06:30:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.664 06:30:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:33.664 06:30:13 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:33.664 06:30:13 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:33.664 06:30:13 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:33.664 06:30:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.664 06:30:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.664 06:30:13 -- common/autotest_common.sh@10 -- # set +x 00:05:33.664 
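Condensed, the eligibility scan above applies three filters to each candidate namespace before settling on test_disk=nvme0n1. A rough stand-alone sketch under the same rules (a 512-byte sector size is assumed here, and the repo's spdk-gpt.py probe is reduced to a plain blkid check, so this is illustrative rather than the script's exact logic):

  # Pick usable test disks: not zoned, no partition table, >= min_disk_size.
  min_disk_size=3221225472   # 3 GiB floor, as in devices.sh@198
  blocks=()
  for dev in /sys/block/nvme*; do
      name=${dev##*/}
      # zoned namespaces are skipped (the log shows 'none' for all four here)
      [[ $(<"$dev/queue/zoned") == none ]] || continue
      # a non-empty PTTYPE means the disk already carries a partition table
      [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]] || continue
      # /sys/block/<dev>/size counts 512-byte sectors
      (( $(<"$dev/size") * 512 >= min_disk_size )) && blocks+=("$name")
  done
  printf '%s\n' "${blocks[@]}"   # nvme0n1 nvme1n1 nvme1n2 nvme1n3 on this runner

"No valid GPT data, bailing" from spdk-gpt.py is the passing case: an empty probe result means the namespace is free for the mount tests that follow.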
************************************ 00:05:33.664 START TEST nvme_mount 00:05:33.664 ************************************ 00:05:33.664 06:30:13 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:33.664 06:30:13 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:33.664 06:30:13 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:33.664 06:30:13 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.664 06:30:13 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.664 06:30:13 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:33.664 06:30:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:33.664 06:30:13 -- setup/common.sh@40 -- # local part_no=1 00:05:33.664 06:30:13 -- setup/common.sh@41 -- # local size=1073741824 00:05:33.664 06:30:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:33.664 06:30:13 -- setup/common.sh@44 -- # parts=() 00:05:33.664 06:30:13 -- setup/common.sh@44 -- # local parts 00:05:33.664 06:30:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:33.664 06:30:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.664 06:30:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.664 06:30:13 -- setup/common.sh@46 -- # (( part++ )) 00:05:33.664 06:30:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.664 06:30:13 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:33.664 06:30:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:33.664 06:30:13 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:34.598 Creating new GPT entries in memory. 00:05:34.598 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:34.598 other utilities. 00:05:34.598 06:30:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:34.598 06:30:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.598 06:30:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:34.598 06:30:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.598 06:30:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:35.581 Creating new GPT entries in memory. 00:05:35.581 The operation has completed successfully. 
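The partition step just traced is plain sgdisk plus a uevent wait, serialized with flock so concurrent setup jobs cannot race on the same disk. A minimal sketch of that sequence (sector bounds copied from the trace; udevadm settle stands in for the repo's sync_dev_uevents.sh helper):

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                   # wipe any old GPT/MBR structures
  # one 262144-sector partition (sectors 2048..264191), held under the disk's lock
  flock "$disk" sgdisk "$disk" --new=1:2048:264191
  udevadm settle                             # wait for /dev/nvme0n1p1 to appear
  [[ -b ${disk}p1 ]] && echo "partition ready: ${disk}p1"

The 262144-sector count is the 1073741824-byte default divided by 4096 at common.sh@51; the uevent wait is what makes the mkfs on the new node safe.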
00:05:35.581 06:30:15 -- setup/common.sh@57 -- # (( part++ )) 00:05:35.581 06:30:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.581 06:30:15 -- setup/common.sh@62 -- # wait 64117 00:05:35.581 06:30:15 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.581 06:30:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:35.581 06:30:15 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.581 06:30:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:35.581 06:30:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:35.581 06:30:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.839 06:30:15 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.839 06:30:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:35.839 06:30:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:35.839 06:30:15 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.839 06:30:15 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.839 06:30:15 -- setup/devices.sh@53 -- # local found=0 00:05:35.839 06:30:15 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.839 06:30:15 -- setup/devices.sh@56 -- # : 00:05:35.839 06:30:15 -- setup/devices.sh@59 -- # local pci status 00:05:35.839 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.839 06:30:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:35.839 06:30:15 -- setup/devices.sh@47 -- # setup output config 00:05:35.839 06:30:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.839 06:30:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.839 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.839 06:30:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:35.839 06:30:15 -- setup/devices.sh@63 -- # found=1 00:05:35.839 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.839 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.839 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.098 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.098 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.366 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.366 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.366 06:30:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.366 06:30:16 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:36.366 06:30:16 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.366 06:30:16 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.366 06:30:16 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.366 06:30:16 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:36.366 06:30:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.366 06:30:16 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.366 06:30:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.366 06:30:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:36.366 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.366 06:30:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.366 06:30:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.654 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.654 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.654 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.654 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.654 06:30:16 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:36.654 06:30:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:36.654 06:30:16 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.654 06:30:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:36.654 06:30:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:36.654 06:30:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.654 06:30:16 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.654 06:30:16 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:36.654 06:30:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:36.654 06:30:16 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.654 06:30:16 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.654 06:30:16 -- setup/devices.sh@53 -- # local found=0 00:05:36.654 06:30:16 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.654 06:30:16 -- setup/devices.sh@56 -- # : 00:05:36.654 06:30:16 -- setup/devices.sh@59 -- # local pci status 00:05:36.654 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.654 06:30:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:36.654 06:30:16 -- setup/devices.sh@47 -- # setup output config 00:05:36.654 06:30:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.654 06:30:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.931 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.931 06:30:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:36.931 06:30:16 -- setup/devices.sh@63 -- # found=1 00:05:36.931 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.931 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.931 
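Each mkfs/mount cycle in this suite follows one small recipe, first against the partition and here against the whole namespace. A sketch with the paths from the trace (the optional trailing size argument is what the 1024M whole-disk variant passes to mkfs.ext4):

  dev=/dev/nvme0n1p1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mkdir -p "$mnt"
  [[ -e $dev ]] || exit 1
  mkfs.ext4 -qF "$dev"       # quiet, force; append e.g. 1024M to cap the fs size
  mount "$dev" "$mnt"
  touch "$mnt/test_nvme"     # dummy file the verify step checks for

verify() then re-runs setup.sh config with PCI_ALLOWED set to the disk's BDF and expects the "Active devices: ... so not binding PCI dev" line, i.e. a mounted disk must stay with the kernel driver.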
06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.189 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.189 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.189 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.189 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.448 06:30:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.448 06:30:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:37.448 06:30:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.448 06:30:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.448 06:30:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:37.448 06:30:17 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.448 06:30:17 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:37.448 06:30:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:37.448 06:30:17 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:37.448 06:30:17 -- setup/devices.sh@50 -- # local mount_point= 00:05:37.448 06:30:17 -- setup/devices.sh@51 -- # local test_file= 00:05:37.448 06:30:17 -- setup/devices.sh@53 -- # local found=0 00:05:37.448 06:30:17 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:37.448 06:30:17 -- setup/devices.sh@59 -- # local pci status 00:05:37.448 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.448 06:30:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:37.448 06:30:17 -- setup/devices.sh@47 -- # setup output config 00:05:37.448 06:30:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.448 06:30:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.707 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.707 06:30:17 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:37.707 06:30:17 -- setup/devices.sh@63 -- # found=1 00:05:37.707 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.707 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.707 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.965 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.965 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.965 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.965 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.223 06:30:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.223 06:30:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:38.223 06:30:17 -- setup/devices.sh@68 -- # return 0 00:05:38.223 06:30:17 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:38.223 06:30:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.223 06:30:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.223 06:30:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.223 06:30:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:38.223 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:38.223 00:05:38.223 real 0m4.517s 00:05:38.223 user 0m1.018s 00:05:38.223 sys 0m1.211s 00:05:38.223 06:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.223 06:30:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.223 ************************************ 00:05:38.223 END TEST nvme_mount 00:05:38.223 ************************************ 00:05:38.223 06:30:17 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:38.223 06:30:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.223 06:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.223 06:30:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.223 ************************************ 00:05:38.223 START TEST dm_mount 00:05:38.223 ************************************ 00:05:38.223 06:30:17 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:38.223 06:30:17 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:38.223 06:30:17 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:38.223 06:30:17 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:38.223 06:30:17 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:38.223 06:30:17 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:38.223 06:30:17 -- setup/common.sh@40 -- # local part_no=2 00:05:38.223 06:30:17 -- setup/common.sh@41 -- # local size=1073741824 00:05:38.223 06:30:17 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:38.223 06:30:17 -- setup/common.sh@44 -- # parts=() 00:05:38.223 06:30:17 -- setup/common.sh@44 -- # local parts 00:05:38.223 06:30:17 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:38.223 06:30:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.223 06:30:17 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:38.223 06:30:17 -- setup/common.sh@46 -- # (( part++ )) 00:05:38.223 06:30:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.223 06:30:17 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:38.223 06:30:17 -- setup/common.sh@46 -- # (( part++ )) 00:05:38.223 06:30:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.223 06:30:17 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:38.223 06:30:17 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:38.223 06:30:17 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:39.158 Creating new GPT entries in memory. 00:05:39.158 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:39.158 other utilities. 00:05:39.158 06:30:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:39.158 06:30:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.158 06:30:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:39.158 06:30:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.158 06:30:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:40.533 Creating new GPT entries in memory. 00:05:40.533 The operation has completed successfully. 00:05:40.533 06:30:20 -- setup/common.sh@57 -- # (( part++ )) 00:05:40.533 06:30:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.533 06:30:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:40.533 06:30:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.533 06:30:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:41.468 The operation has completed successfully. 00:05:41.468 06:30:21 -- setup/common.sh@57 -- # (( part++ )) 00:05:41.468 06:30:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.468 06:30:21 -- setup/common.sh@62 -- # wait 64577 00:05:41.468 06:30:21 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:41.468 06:30:21 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.468 06:30:21 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:41.468 06:30:21 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:41.468 06:30:21 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:41.468 06:30:21 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:41.468 06:30:21 -- setup/devices.sh@161 -- # break 00:05:41.468 06:30:21 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:41.468 06:30:21 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:41.468 06:30:21 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:41.468 06:30:21 -- setup/devices.sh@166 -- # dm=dm-0 00:05:41.468 06:30:21 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:41.468 06:30:21 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:41.468 06:30:21 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.468 06:30:21 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:41.468 06:30:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.468 06:30:21 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:41.468 06:30:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:41.468 06:30:21 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.468 06:30:21 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:41.468 06:30:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:41.468 06:30:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:41.468 06:30:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.468 06:30:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:41.468 06:30:21 -- setup/devices.sh@53 -- # local found=0 00:05:41.468 06:30:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:41.468 06:30:21 -- setup/devices.sh@56 -- # : 00:05:41.468 06:30:21 -- setup/devices.sh@59 -- # local pci status 00:05:41.468 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.468 06:30:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:41.468 06:30:21 -- setup/devices.sh@47 -- # setup output config 00:05:41.468 06:30:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.468 06:30:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:41.468 06:30:21 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:41.468 06:30:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:41.468 06:30:21 -- setup/devices.sh@63 -- # found=1 00:05:41.468 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.468 06:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:41.468 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.726 06:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:41.727 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.986 06:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:41.986 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.986 06:30:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.986 06:30:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:41.986 06:30:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.986 06:30:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:41.986 06:30:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:41.986 06:30:21 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.986 06:30:21 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:41.986 06:30:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:41.986 06:30:21 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:41.986 06:30:21 -- setup/devices.sh@50 -- # local mount_point= 00:05:41.986 06:30:21 -- setup/devices.sh@51 -- # local test_file= 00:05:41.986 06:30:21 -- setup/devices.sh@53 -- # local found=0 00:05:41.986 06:30:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:41.986 06:30:21 -- setup/devices.sh@59 -- # local pci status 00:05:41.986 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.986 06:30:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:41.986 06:30:21 -- setup/devices.sh@47 -- # setup output config 00:05:41.986 06:30:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.986 06:30:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.245 06:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.245 06:30:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:42.245 06:30:21 -- setup/devices.sh@63 -- # found=1 00:05:42.245 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.245 06:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.245 06:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.503 06:30:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.503 06:30:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.503 06:30:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.503 06:30:22 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.762 06:30:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.762 06:30:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:42.762 06:30:22 -- setup/devices.sh@68 -- # return 0 00:05:42.762 06:30:22 -- setup/devices.sh@187 -- # cleanup_dm 00:05:42.762 06:30:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.762 06:30:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:42.762 06:30:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:42.762 06:30:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.762 06:30:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:42.762 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.762 06:30:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:42.762 06:30:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:42.762 00:05:42.762 real 0m4.534s 00:05:42.762 user 0m0.689s 00:05:42.762 sys 0m0.787s 00:05:42.762 06:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.762 06:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:42.762 ************************************ 00:05:42.762 END TEST dm_mount 00:05:42.762 ************************************ 00:05:42.762 06:30:22 -- setup/devices.sh@1 -- # cleanup 00:05:42.762 06:30:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:42.762 06:30:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.762 06:30:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.762 06:30:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:42.762 06:30:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.762 06:30:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.022 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.022 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.022 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.022 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.022 06:30:22 -- setup/devices.sh@12 -- # cleanup_dm 00:05:43.022 06:30:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:43.022 06:30:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.022 06:30:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.022 06:30:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.022 06:30:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.022 06:30:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:43.022 00:05:43.022 real 0m10.589s 00:05:43.022 user 0m2.374s 00:05:43.022 sys 0m2.583s 00:05:43.022 06:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.022 06:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.023 ************************************ 00:05:43.023 END TEST devices 00:05:43.023 ************************************ 00:05:43.023 00:05:43.023 real 0m21.773s 00:05:43.023 user 0m7.223s 00:05:43.023 sys 0m8.886s 00:05:43.023 06:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.023 06:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.023 ************************************ 00:05:43.023 END TEST setup.sh 00:05:43.023 ************************************ 00:05:43.023 06:30:22 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:43.286 Hugepages 00:05:43.286 node hugesize free / total 00:05:43.286 node0 1048576kB 0 / 0 00:05:43.286 node0 2048kB 2048 / 2048 00:05:43.286 00:05:43.286 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.286 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:43.607 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:43.607 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:43.607 06:30:23 -- spdk/autotest.sh@141 -- # uname -s 00:05:43.607 06:30:23 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:43.607 06:30:23 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:43.607 06:30:23 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.176 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.434 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.435 06:30:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:45.371 06:30:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:45.371 06:30:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:45.371 06:30:25 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:45.371 06:30:25 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:45.371 06:30:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:45.371 06:30:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:45.371 06:30:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.371 06:30:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:45.371 06:30:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:45.371 06:30:25 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:45.371 06:30:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:45.371 06:30:25 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.940 Waiting for block devices as requested 00:05:45.940 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:45.940 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:45.940 06:30:25 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:45.940 06:30:25 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:45.940 06:30:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:45.940 06:30:25 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:45.940 06:30:25 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:45.940 06:30:25 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:45.940 06:30:25 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1542 -- # continue 00:05:45.940 06:30:25 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:45.940 06:30:25 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:45.940 06:30:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:45.940 06:30:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:45.940 06:30:25 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:45.940 06:30:25 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:45.940 06:30:25 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:45.940 06:30:25 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:45.940 06:30:25 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:45.940 06:30:25 -- common/autotest_common.sh@1542 -- # continue 00:05:45.940 06:30:25 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:45.940 06:30:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.940 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.199 06:30:25 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:46.199 06:30:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.199 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.199 06:30:25 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.766 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.766 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.089 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:47.089 06:30:26 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:47.089 06:30:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.089 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.089 06:30:26 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:47.089 06:30:26 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:47.089 06:30:26 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:47.089 06:30:26 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:47.089 06:30:26 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:47.089 06:30:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:47.089 06:30:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:47.089 06:30:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:47.089 06:30:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.089 06:30:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.089 06:30:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:47.089 06:30:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:47.089 06:30:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:47.089 06:30:26 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:47.089 06:30:26 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:47.089 06:30:26 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:47.089 06:30:26 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.089 06:30:26 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:47.089 06:30:26 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:47.089 06:30:26 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:47.089 06:30:26 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.089 06:30:26 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:47.089 06:30:26 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:47.089 06:30:26 -- common/autotest_common.sh@1578 -- # return 0 00:05:47.089 06:30:26 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:47.089 06:30:26 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:47.089 06:30:26 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:47.089 06:30:26 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:47.089 06:30:26 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:47.089 06:30:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.089 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.089 06:30:26 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.089 06:30:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.089 06:30:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.089 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.089 ************************************ 00:05:47.089 START TEST env 00:05:47.089 ************************************ 00:05:47.089 06:30:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.363 * Looking for test storage... 
00:05:47.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:47.363 06:30:27 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.363 06:30:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.363 06:30:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.363 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.363 ************************************ 00:05:47.363 START TEST env_memory 00:05:47.363 ************************************ 00:05:47.363 06:30:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.363 00:05:47.363 00:05:47.363 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.363 http://cunit.sourceforge.net/ 00:05:47.363 00:05:47.363 00:05:47.363 Suite: memory 00:05:47.363 Test: alloc and free memory map ...[2024-07-12 06:30:27.084291] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.363 passed 00:05:47.363 Test: mem map translation ...[2024-07-12 06:30:27.115946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.363 [2024-07-12 06:30:27.116006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.363 [2024-07-12 06:30:27.116060] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.363 [2024-07-12 06:30:27.116071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.363 passed 00:05:47.363 Test: mem map registration ...[2024-07-12 06:30:27.181185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:47.363 [2024-07-12 06:30:27.181225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:47.363 passed 00:05:47.363 Test: mem map adjacent registrations ...passed 00:05:47.363 00:05:47.363 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.363 suites 1 1 n/a 0 0 00:05:47.363 tests 4 4 4 0 0 00:05:47.363 asserts 152 152 152 0 n/a 00:05:47.363 00:05:47.363 Elapsed time = 0.217 seconds 00:05:47.363 00:05:47.363 real 0m0.236s 00:05:47.363 user 0m0.210s 00:05:47.363 sys 0m0.017s 00:05:47.363 06:30:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.363 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.363 ************************************ 00:05:47.363 END TEST env_memory 00:05:47.364 ************************************ 00:05:47.623 06:30:27 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.623 06:30:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.623 06:30:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.623 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.623 ************************************ 00:05:47.623 START TEST env_vtophys 00:05:47.623 ************************************ 00:05:47.623 06:30:27 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.623 EAL: lib.eal log level changed from notice to debug 00:05:47.623 EAL: Detected lcore 0 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 1 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 2 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 3 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 4 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 5 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 6 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 7 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 8 as core 0 on socket 0 00:05:47.623 EAL: Detected lcore 9 as core 0 on socket 0 00:05:47.623 EAL: Maximum logical cores by configuration: 128 00:05:47.623 EAL: Detected CPU lcores: 10 00:05:47.623 EAL: Detected NUMA nodes: 1 00:05:47.623 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:47.623 EAL: Detected shared linkage of DPDK 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:47.623 EAL: Registered [vdev] bus. 00:05:47.623 EAL: bus.vdev log level changed from disabled to notice 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:47.623 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:47.623 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:47.623 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:47.623 EAL: No shared files mode enabled, IPC will be disabled 00:05:47.623 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Selected IOVA mode 'PA' 00:05:47.624 EAL: Probing VFIO support... 00:05:47.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.624 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:47.624 EAL: Ask a virtual area of 0x2e000 bytes 00:05:47.624 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:47.624 EAL: Setting up physically contiguous memory... 
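The EAL probe above finds neither /sys/module/vfio nor /sys/module/vfio_pci, so VFIO support is skipped, which is why "Selected IOVA mode 'PA'" appears just before it: physical addressing matches the uio_pci_generic binding that setup.sh applied earlier. A minimal sketch, not part of the harness, of the same check done by hand:

    # assumes the sysfs layout shown in this log
    if [ -e /sys/module/vfio ] && [ -e /sys/module/vfio_pci ]; then
        echo "vfio loaded: EAL could run with IOVA mode 'VA' behind an IOMMU"
    else
        echo "vfio not loaded: EAL skips VFIO and falls back to IOVA mode 'PA'"
    fi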
00:05:47.624 EAL: Setting maximum number of open files to 524288 00:05:47.624 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:47.624 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:47.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.624 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:47.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.624 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:47.624 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:47.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.624 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:47.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.624 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:47.624 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:47.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.624 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:47.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.624 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:47.624 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:47.624 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.624 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:47.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.624 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.624 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:47.624 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:47.624 EAL: Hugepages will be freed exactly as allocated. 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: TSC frequency is ~2200000 KHz 00:05:47.624 EAL: Main lcore 0 is ready (tid=7ff038a3ba00;cpuset=[0]) 00:05:47.624 EAL: Trying to obtain current memory policy. 00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 0 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 2MB 00:05:47.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:47.624 EAL: Mem event callback 'spdk:(nil)' registered 00:05:47.624 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:47.624 00:05:47.624 00:05:47.624 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.624 http://cunit.sourceforge.net/ 00:05:47.624 00:05:47.624 00:05:47.624 Suite: components_suite 00:05:47.624 Test: vtophys_malloc_test ...passed 00:05:47.624 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
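Each "Ask a virtual area of 0x400000000" above reserves address space for one memseg list: n_segs:8192 segments x hugepage_sz:2097152 bytes = 16 GiB of VA, and four such lists are created for socket 0. These are reservations only; the pages backing them come from the 2 MiB hugepage pool shown in the Hugepages table earlier (node0 2048kB 2048 / 2048, i.e. 4 GiB). A sketch for watching that pool while a test runs, assuming the standard sysfs paths:

    grep -H . /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages \
              /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages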
00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 4 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.624 EAL: Trying to obtain current memory policy. 00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 4 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 6MB 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was shrunk by 6MB 00:05:47.624 EAL: Trying to obtain current memory policy. 00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 4 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 10MB 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was shrunk by 10MB 00:05:47.624 EAL: Trying to obtain current memory policy. 00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 4 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 18MB 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was shrunk by 18MB 00:05:47.624 EAL: Trying to obtain current memory policy. 00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 4 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 34MB 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was shrunk by 34MB 00:05:47.624 EAL: Trying to obtain current memory policy. 
00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.624 EAL: Restoring previous memory policy: 4 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was expanded by 66MB 00:05:47.624 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.624 EAL: request: mp_malloc_sync 00:05:47.624 EAL: No shared files mode enabled, IPC is disabled 00:05:47.624 EAL: Heap on socket 0 was shrunk by 66MB 00:05:47.624 EAL: Trying to obtain current memory policy. 00:05:47.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.883 EAL: Restoring previous memory policy: 4 00:05:47.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.883 EAL: request: mp_malloc_sync 00:05:47.883 EAL: No shared files mode enabled, IPC is disabled 00:05:47.883 EAL: Heap on socket 0 was expanded by 130MB 00:05:47.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.883 EAL: request: mp_malloc_sync 00:05:47.883 EAL: No shared files mode enabled, IPC is disabled 00:05:47.883 EAL: Heap on socket 0 was shrunk by 130MB 00:05:47.883 EAL: Trying to obtain current memory policy. 00:05:47.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.883 EAL: Restoring previous memory policy: 4 00:05:47.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.883 EAL: request: mp_malloc_sync 00:05:47.883 EAL: No shared files mode enabled, IPC is disabled 00:05:47.883 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.883 EAL: request: mp_malloc_sync 00:05:47.883 EAL: No shared files mode enabled, IPC is disabled 00:05:47.883 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.883 EAL: Trying to obtain current memory policy. 00:05:47.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.883 EAL: Restoring previous memory policy: 4 00:05:47.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.883 EAL: request: mp_malloc_sync 00:05:47.883 EAL: No shared files mode enabled, IPC is disabled 00:05:47.883 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.143 EAL: request: mp_malloc_sync 00:05:48.143 EAL: No shared files mode enabled, IPC is disabled 00:05:48.143 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.143 EAL: Trying to obtain current memory policy. 
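The expand/shrink pairs above belong to vtophys_spdk_malloc_test, which allocates successively larger buffers: the reported growth steps run 4, 6, 10, 18, 34, 66, 130, 258, 514 MB (and 1026 MB below), i.e. 2^k + 2 MB, and every expansion is mirrored by an equal shrink when the buffer is freed, confirming the heap hands memory back cleanly. A sketch reproducing the ladder of sizes seen in this log:

    # hypothetical helper; prints 4 6 10 18 34 66 130 258 514 1026
    for k in $(seq 1 10); do echo $((2**k + 2)); done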
00:05:48.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.143 EAL: Restoring previous memory policy: 4 00:05:48.143 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.143 EAL: request: mp_malloc_sync 00:05:48.143 EAL: No shared files mode enabled, IPC is disabled 00:05:48.143 EAL: Heap on socket 0 was expanded by 1026MB 00:05:48.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.402 passed 00:05:48.402 00:05:48.402 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.402 suites 1 1 n/a 0 0 00:05:48.402 tests 2 2 2 0 0 00:05:48.402 asserts 5358 5358 5358 0 n/a 00:05:48.402 00:05:48.402 Elapsed time = 0.692 seconds 00:05:48.402 EAL: request: mp_malloc_sync 00:05:48.402 EAL: No shared files mode enabled, IPC is disabled 00:05:48.402 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:48.402 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.402 EAL: request: mp_malloc_sync 00:05:48.402 EAL: No shared files mode enabled, IPC is disabled 00:05:48.402 EAL: Heap on socket 0 was shrunk by 2MB 00:05:48.402 EAL: No shared files mode enabled, IPC is disabled 00:05:48.402 EAL: No shared files mode enabled, IPC is disabled 00:05:48.402 EAL: No shared files mode enabled, IPC is disabled 00:05:48.402 00:05:48.402 real 0m0.896s 00:05:48.402 user 0m0.465s 00:05:48.402 sys 0m0.299s 00:05:48.402 06:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.402 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.402 ************************************ 00:05:48.402 END TEST env_vtophys 00:05:48.402 ************************************ 00:05:48.402 06:30:28 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:48.402 06:30:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.402 06:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.402 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.402 ************************************ 00:05:48.402 START TEST env_pci 00:05:48.402 ************************************ 00:05:48.402 06:30:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:48.402 00:05:48.402 00:05:48.402 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.402 http://cunit.sourceforge.net/ 00:05:48.402 00:05:48.402 00:05:48.402 Suite: pci 00:05:48.402 Test: pci_hook ...[2024-07-12 06:30:28.278110] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65702 has claimed it 00:05:48.402 passed 00:05:48.402 00:05:48.402 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.402 suites 1 1 n/a 0 0 00:05:48.402 tests 1 1 1 0 0 00:05:48.402 asserts 25 25 25 0 n/a 00:05:48.402 00:05:48.402 Elapsed time = 0.002 seconds 00:05:48.402 EAL: Cannot find device (10000:00:01.0) 00:05:48.402 EAL: Failed to attach device on primary process 00:05:48.402 00:05:48.402 real 0m0.018s 00:05:48.402 user 0m0.006s 00:05:48.402 sys 0m0.011s 00:05:48.402 ************************************ 00:05:48.402 END TEST env_pci 00:05:48.402 ************************************ 00:05:48.402 06:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.402 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.662 06:30:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:48.662 06:30:28 -- env/env.sh@15 -- # uname 00:05:48.662 06:30:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:48.662 06:30:28 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:48.662 06:30:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.662 06:30:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:48.662 06:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.662 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.662 ************************************ 00:05:48.662 START TEST env_dpdk_post_init 00:05:48.662 ************************************ 00:05:48.662 06:30:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.662 EAL: Detected CPU lcores: 10 00:05:48.662 EAL: Detected NUMA nodes: 1 00:05:48.662 EAL: Detected shared linkage of DPDK 00:05:48.662 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.662 EAL: Selected IOVA mode 'PA' 00:05:48.662 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.662 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:48.662 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:48.662 Starting DPDK initialization... 00:05:48.662 Starting SPDK post initialization... 00:05:48.662 SPDK NVMe probe 00:05:48.662 Attaching to 0000:00:06.0 00:05:48.662 Attaching to 0000:00:07.0 00:05:48.662 Attached to 0000:00:06.0 00:05:48.662 Attached to 0000:00:07.0 00:05:48.662 Cleaning up... 00:05:48.662 00:05:48.662 real 0m0.188s 00:05:48.662 user 0m0.052s 00:05:48.662 sys 0m0.036s 00:05:48.662 06:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.662 ************************************ 00:05:48.662 END TEST env_dpdk_post_init 00:05:48.662 ************************************ 00:05:48.662 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.662 06:30:28 -- env/env.sh@26 -- # uname 00:05:48.662 06:30:28 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:48.662 06:30:28 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.662 06:30:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.662 06:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.662 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 ************************************ 00:05:48.921 START TEST env_mem_callbacks 00:05:48.921 ************************************ 00:05:48.921 06:30:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.921 EAL: Detected CPU lcores: 10 00:05:48.921 EAL: Detected NUMA nodes: 1 00:05:48.921 EAL: Detected shared linkage of DPDK 00:05:48.921 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.921 EAL: Selected IOVA mode 'PA' 00:05:48.921 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.921 00:05:48.921 00:05:48.921 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.921 http://cunit.sourceforge.net/ 00:05:48.921 00:05:48.921 00:05:48.921 Suite: memory 00:05:48.921 Test: test ... 
00:05:48.921 register 0x200000200000 2097152 00:05:48.921 malloc 3145728 00:05:48.921 register 0x200000400000 4194304 00:05:48.922 buf 0x200000500000 len 3145728 PASSED 00:05:48.922 malloc 64 00:05:48.922 buf 0x2000004fff40 len 64 PASSED 00:05:48.922 malloc 4194304 00:05:48.922 register 0x200000800000 6291456 00:05:48.922 buf 0x200000a00000 len 4194304 PASSED 00:05:48.922 free 0x200000500000 3145728 00:05:48.922 free 0x2000004fff40 64 00:05:48.922 unregister 0x200000400000 4194304 PASSED 00:05:48.922 free 0x200000a00000 4194304 00:05:48.922 unregister 0x200000800000 6291456 PASSED 00:05:48.922 malloc 8388608 00:05:48.922 register 0x200000400000 10485760 00:05:48.922 buf 0x200000600000 len 8388608 PASSED 00:05:48.922 free 0x200000600000 8388608 00:05:48.922 unregister 0x200000400000 10485760 PASSED 00:05:48.922 passed 00:05:48.922 00:05:48.922 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.922 suites 1 1 n/a 0 0 00:05:48.922 tests 1 1 1 0 0 00:05:48.922 asserts 15 15 15 0 n/a 00:05:48.922 00:05:48.922 Elapsed time = 0.008 seconds 00:05:48.922 00:05:48.922 real 0m0.143s 00:05:48.922 user 0m0.020s 00:05:48.922 sys 0m0.023s 00:05:48.922 06:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.922 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.922 ************************************ 00:05:48.922 END TEST env_mem_callbacks 00:05:48.922 ************************************ 00:05:48.922 00:05:48.922 real 0m1.822s 00:05:48.922 user 0m0.875s 00:05:48.922 sys 0m0.594s 00:05:48.922 06:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.922 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.922 ************************************ 00:05:48.922 END TEST env 00:05:48.922 ************************************ 00:05:48.922 06:30:28 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:48.922 06:30:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.922 06:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.922 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.922 ************************************ 00:05:48.922 START TEST rpc 00:05:48.922 ************************************ 00:05:48.922 06:30:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.180 * Looking for test storage... 00:05:49.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.180 06:30:28 -- rpc/rpc.sh@65 -- # spdk_pid=65812 00:05:49.180 06:30:28 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.180 06:30:28 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:49.180 06:30:28 -- rpc/rpc.sh@67 -- # waitforlisten 65812 00:05:49.180 06:30:28 -- common/autotest_common.sh@819 -- # '[' -z 65812 ']' 00:05:49.180 06:30:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.180 06:30:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.180 06:30:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
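At this point rpc.sh has launched the target (spdk_tgt -e bdev, pid 65812) and is polling until the JSON-RPC Unix socket appears. A hand-run sketch of the same setup, using the paths from this log — the harness itself relies on its waitforlisten and killprocess helpers rather than the loop below:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # the integrity tests that follow drive it over JSON-RPC, e.g.:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512   # creates Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length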
00:05:49.180 06:30:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.180 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.180 [2024-07-12 06:30:28.964827] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:49.180 [2024-07-12 06:30:28.964935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65812 ] 00:05:49.439 [2024-07-12 06:30:29.109390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.439 [2024-07-12 06:30:29.147846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.439 [2024-07-12 06:30:29.148033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:49.439 [2024-07-12 06:30:29.148050] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65812' to capture a snapshot of events at runtime. 00:05:49.439 [2024-07-12 06:30:29.148061] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65812 for offline analysis/debug. 00:05:49.439 [2024-07-12 06:30:29.148100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.375 06:30:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.375 06:30:30 -- common/autotest_common.sh@852 -- # return 0 00:05:50.375 06:30:30 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.375 06:30:30 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.375 06:30:30 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:50.375 06:30:30 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:50.375 06:30:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.375 06:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.375 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.375 ************************************ 00:05:50.375 START TEST rpc_integrity 00:05:50.375 ************************************ 00:05:50.376 06:30:30 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:50.376 06:30:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.376 06:30:30 -- rpc/rpc.sh@13 -- # jq length 00:05:50.376 06:30:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.376 06:30:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:50.376 06:30:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- 
common/autotest_common.sh@10 -- # set +x 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.376 { 00:05:50.376 "name": "Malloc0", 00:05:50.376 "aliases": [ 00:05:50.376 "9ab7bb55-0bef-4f1f-b0b6-753848f1b7be" 00:05:50.376 ], 00:05:50.376 "product_name": "Malloc disk", 00:05:50.376 "block_size": 512, 00:05:50.376 "num_blocks": 16384, 00:05:50.376 "uuid": "9ab7bb55-0bef-4f1f-b0b6-753848f1b7be", 00:05:50.376 "assigned_rate_limits": { 00:05:50.376 "rw_ios_per_sec": 0, 00:05:50.376 "rw_mbytes_per_sec": 0, 00:05:50.376 "r_mbytes_per_sec": 0, 00:05:50.376 "w_mbytes_per_sec": 0 00:05:50.376 }, 00:05:50.376 "claimed": false, 00:05:50.376 "zoned": false, 00:05:50.376 "supported_io_types": { 00:05:50.376 "read": true, 00:05:50.376 "write": true, 00:05:50.376 "unmap": true, 00:05:50.376 "write_zeroes": true, 00:05:50.376 "flush": true, 00:05:50.376 "reset": true, 00:05:50.376 "compare": false, 00:05:50.376 "compare_and_write": false, 00:05:50.376 "abort": true, 00:05:50.376 "nvme_admin": false, 00:05:50.376 "nvme_io": false 00:05:50.376 }, 00:05:50.376 "memory_domains": [ 00:05:50.376 { 00:05:50.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.376 "dma_device_type": 2 00:05:50.376 } 00:05:50.376 ], 00:05:50.376 "driver_specific": {} 00:05:50.376 } 00:05:50.376 ]' 00:05:50.376 06:30:30 -- rpc/rpc.sh@17 -- # jq length 00:05:50.376 06:30:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:50.376 06:30:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 [2024-07-12 06:30:30.193726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:50.376 [2024-07-12 06:30:30.193801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.376 [2024-07-12 06:30:30.193835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d57580 00:05:50.376 [2024-07-12 06:30:30.193852] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.376 [2024-07-12 06:30:30.195925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.376 [2024-07-12 06:30:30.195993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:50.376 Passthru0 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:50.376 { 00:05:50.376 "name": "Malloc0", 00:05:50.376 "aliases": [ 00:05:50.376 "9ab7bb55-0bef-4f1f-b0b6-753848f1b7be" 00:05:50.376 ], 00:05:50.376 "product_name": "Malloc disk", 00:05:50.376 "block_size": 512, 00:05:50.376 "num_blocks": 16384, 00:05:50.376 "uuid": "9ab7bb55-0bef-4f1f-b0b6-753848f1b7be", 00:05:50.376 "assigned_rate_limits": { 00:05:50.376 "rw_ios_per_sec": 0, 00:05:50.376 "rw_mbytes_per_sec": 0, 00:05:50.376 "r_mbytes_per_sec": 0, 00:05:50.376 "w_mbytes_per_sec": 0 00:05:50.376 }, 00:05:50.376 "claimed": true, 00:05:50.376 "claim_type": "exclusive_write", 00:05:50.376 "zoned": false, 00:05:50.376 "supported_io_types": { 00:05:50.376 "read": true, 
00:05:50.376 "write": true, 00:05:50.376 "unmap": true, 00:05:50.376 "write_zeroes": true, 00:05:50.376 "flush": true, 00:05:50.376 "reset": true, 00:05:50.376 "compare": false, 00:05:50.376 "compare_and_write": false, 00:05:50.376 "abort": true, 00:05:50.376 "nvme_admin": false, 00:05:50.376 "nvme_io": false 00:05:50.376 }, 00:05:50.376 "memory_domains": [ 00:05:50.376 { 00:05:50.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.376 "dma_device_type": 2 00:05:50.376 } 00:05:50.376 ], 00:05:50.376 "driver_specific": {} 00:05:50.376 }, 00:05:50.376 { 00:05:50.376 "name": "Passthru0", 00:05:50.376 "aliases": [ 00:05:50.376 "cf4b6fb9-4e25-55ec-93ee-921361e9abc9" 00:05:50.376 ], 00:05:50.376 "product_name": "passthru", 00:05:50.376 "block_size": 512, 00:05:50.376 "num_blocks": 16384, 00:05:50.376 "uuid": "cf4b6fb9-4e25-55ec-93ee-921361e9abc9", 00:05:50.376 "assigned_rate_limits": { 00:05:50.376 "rw_ios_per_sec": 0, 00:05:50.376 "rw_mbytes_per_sec": 0, 00:05:50.376 "r_mbytes_per_sec": 0, 00:05:50.376 "w_mbytes_per_sec": 0 00:05:50.376 }, 00:05:50.376 "claimed": false, 00:05:50.376 "zoned": false, 00:05:50.376 "supported_io_types": { 00:05:50.376 "read": true, 00:05:50.376 "write": true, 00:05:50.376 "unmap": true, 00:05:50.376 "write_zeroes": true, 00:05:50.376 "flush": true, 00:05:50.376 "reset": true, 00:05:50.376 "compare": false, 00:05:50.376 "compare_and_write": false, 00:05:50.376 "abort": true, 00:05:50.376 "nvme_admin": false, 00:05:50.376 "nvme_io": false 00:05:50.376 }, 00:05:50.376 "memory_domains": [ 00:05:50.376 { 00:05:50.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.376 "dma_device_type": 2 00:05:50.376 } 00:05:50.376 ], 00:05:50.376 "driver_specific": { 00:05:50.376 "passthru": { 00:05:50.376 "name": "Passthru0", 00:05:50.376 "base_bdev_name": "Malloc0" 00:05:50.376 } 00:05:50.376 } 00:05:50.376 } 00:05:50.376 ]' 00:05:50.376 06:30:30 -- rpc/rpc.sh@21 -- # jq length 00:05:50.376 06:30:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.376 06:30:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.376 06:30:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:50.376 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.376 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.636 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.636 06:30:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.636 06:30:30 -- rpc/rpc.sh@26 -- # jq length 00:05:50.636 06:30:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.636 00:05:50.636 real 0m0.308s 00:05:50.636 user 0m0.207s 00:05:50.636 sys 0m0.040s 00:05:50.636 06:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.636 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.636 ************************************ 00:05:50.636 END TEST rpc_integrity 00:05:50.636 ************************************ 00:05:50.636 06:30:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:50.636 06:30:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:05:50.636 06:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.636 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.636 ************************************ 00:05:50.636 START TEST rpc_plugins 00:05:50.636 ************************************ 00:05:50.636 06:30:30 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:50.636 06:30:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:50.636 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.636 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.636 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.636 06:30:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:50.637 06:30:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:50.637 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.637 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.637 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.637 06:30:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:50.637 { 00:05:50.637 "name": "Malloc1", 00:05:50.637 "aliases": [ 00:05:50.637 "a0f30c6e-b5e6-4b13-9b27-c889642ec6cc" 00:05:50.637 ], 00:05:50.637 "product_name": "Malloc disk", 00:05:50.637 "block_size": 4096, 00:05:50.637 "num_blocks": 256, 00:05:50.637 "uuid": "a0f30c6e-b5e6-4b13-9b27-c889642ec6cc", 00:05:50.637 "assigned_rate_limits": { 00:05:50.637 "rw_ios_per_sec": 0, 00:05:50.637 "rw_mbytes_per_sec": 0, 00:05:50.637 "r_mbytes_per_sec": 0, 00:05:50.637 "w_mbytes_per_sec": 0 00:05:50.637 }, 00:05:50.637 "claimed": false, 00:05:50.637 "zoned": false, 00:05:50.637 "supported_io_types": { 00:05:50.637 "read": true, 00:05:50.637 "write": true, 00:05:50.637 "unmap": true, 00:05:50.637 "write_zeroes": true, 00:05:50.637 "flush": true, 00:05:50.637 "reset": true, 00:05:50.637 "compare": false, 00:05:50.637 "compare_and_write": false, 00:05:50.637 "abort": true, 00:05:50.637 "nvme_admin": false, 00:05:50.637 "nvme_io": false 00:05:50.637 }, 00:05:50.637 "memory_domains": [ 00:05:50.637 { 00:05:50.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.637 "dma_device_type": 2 00:05:50.637 } 00:05:50.637 ], 00:05:50.637 "driver_specific": {} 00:05:50.637 } 00:05:50.637 ]' 00:05:50.637 06:30:30 -- rpc/rpc.sh@32 -- # jq length 00:05:50.637 06:30:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.637 06:30:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.637 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.637 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.637 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.637 06:30:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.637 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.637 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.637 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.637 06:30:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.637 06:30:30 -- rpc/rpc.sh@36 -- # jq length 00:05:50.898 06:30:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:50.898 00:05:50.898 real 0m0.164s 00:05:50.898 user 0m0.100s 00:05:50.898 sys 0m0.027s 00:05:50.898 06:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.898 ************************************ 00:05:50.898 END TEST rpc_plugins 00:05:50.898 ************************************ 00:05:50.898 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.898 06:30:30 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:05:50.898 06:30:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.898 06:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.898 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.898 ************************************ 00:05:50.898 START TEST rpc_trace_cmd_test 00:05:50.898 ************************************ 00:05:50.898 06:30:30 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:50.898 06:30:30 -- rpc/rpc.sh@40 -- # local info 00:05:50.898 06:30:30 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:50.898 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.898 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.898 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.898 06:30:30 -- rpc/rpc.sh@42 -- # info='{ 00:05:50.898 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65812", 00:05:50.898 "tpoint_group_mask": "0x8", 00:05:50.898 "iscsi_conn": { 00:05:50.898 "mask": "0x2", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "scsi": { 00:05:50.898 "mask": "0x4", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "bdev": { 00:05:50.898 "mask": "0x8", 00:05:50.898 "tpoint_mask": "0xffffffffffffffff" 00:05:50.898 }, 00:05:50.898 "nvmf_rdma": { 00:05:50.898 "mask": "0x10", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "nvmf_tcp": { 00:05:50.898 "mask": "0x20", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "ftl": { 00:05:50.898 "mask": "0x40", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "blobfs": { 00:05:50.898 "mask": "0x80", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "dsa": { 00:05:50.898 "mask": "0x200", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "thread": { 00:05:50.898 "mask": "0x400", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "nvme_pcie": { 00:05:50.898 "mask": "0x800", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "iaa": { 00:05:50.898 "mask": "0x1000", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "nvme_tcp": { 00:05:50.898 "mask": "0x2000", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 }, 00:05:50.898 "bdev_nvme": { 00:05:50.898 "mask": "0x4000", 00:05:50.898 "tpoint_mask": "0x0" 00:05:50.898 } 00:05:50.898 }' 00:05:50.898 06:30:30 -- rpc/rpc.sh@43 -- # jq length 00:05:50.898 06:30:30 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:50.898 06:30:30 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:50.898 06:30:30 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:50.898 06:30:30 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:50.898 06:30:30 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:50.898 06:30:30 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:51.158 06:30:30 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:51.158 06:30:30 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:51.158 06:30:30 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:51.158 00:05:51.158 real 0m0.286s 00:05:51.158 user 0m0.233s 00:05:51.158 sys 0m0.042s 00:05:51.158 ************************************ 00:05:51.158 06:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.158 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.158 END TEST rpc_trace_cmd_test 00:05:51.158 ************************************ 00:05:51.158 06:30:30 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:51.158 06:30:30 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:51.158 06:30:30 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:05:51.158 06:30:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.158 06:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.158 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.158 ************************************ 00:05:51.158 START TEST rpc_daemon_integrity 00:05:51.158 ************************************ 00:05:51.158 06:30:30 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:51.158 06:30:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.158 06:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.158 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.158 06:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.158 06:30:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.158 06:30:30 -- rpc/rpc.sh@13 -- # jq length 00:05:51.159 06:30:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.159 06:30:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.159 06:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.159 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.159 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.159 06:30:31 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:51.159 06:30:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.159 06:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.159 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.159 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.418 06:30:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.418 { 00:05:51.418 "name": "Malloc2", 00:05:51.418 "aliases": [ 00:05:51.418 "8a4a91ba-454d-4ca8-98c0-60c786a78e1d" 00:05:51.418 ], 00:05:51.418 "product_name": "Malloc disk", 00:05:51.418 "block_size": 512, 00:05:51.418 "num_blocks": 16384, 00:05:51.418 "uuid": "8a4a91ba-454d-4ca8-98c0-60c786a78e1d", 00:05:51.418 "assigned_rate_limits": { 00:05:51.418 "rw_ios_per_sec": 0, 00:05:51.418 "rw_mbytes_per_sec": 0, 00:05:51.418 "r_mbytes_per_sec": 0, 00:05:51.418 "w_mbytes_per_sec": 0 00:05:51.418 }, 00:05:51.418 "claimed": false, 00:05:51.418 "zoned": false, 00:05:51.418 "supported_io_types": { 00:05:51.418 "read": true, 00:05:51.418 "write": true, 00:05:51.418 "unmap": true, 00:05:51.418 "write_zeroes": true, 00:05:51.418 "flush": true, 00:05:51.418 "reset": true, 00:05:51.418 "compare": false, 00:05:51.418 "compare_and_write": false, 00:05:51.418 "abort": true, 00:05:51.418 "nvme_admin": false, 00:05:51.418 "nvme_io": false 00:05:51.418 }, 00:05:51.418 "memory_domains": [ 00:05:51.418 { 00:05:51.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.418 "dma_device_type": 2 00:05:51.418 } 00:05:51.418 ], 00:05:51.418 "driver_specific": {} 00:05:51.418 } 00:05:51.418 ]' 00:05:51.418 06:30:31 -- rpc/rpc.sh@17 -- # jq length 00:05:51.418 06:30:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.418 06:30:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.418 06:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.418 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.418 [2024-07-12 06:30:31.135240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.418 [2024-07-12 06:30:31.135410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.418 [2024-07-12 06:30:31.135456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d58d20 00:05:51.418 [2024-07-12 
06:30:31.135472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.418 [2024-07-12 06:30:31.137220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.418 [2024-07-12 06:30:31.137267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.418 Passthru0 00:05:51.418 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.418 06:30:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.418 06:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.418 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.418 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.418 06:30:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.418 { 00:05:51.418 "name": "Malloc2", 00:05:51.418 "aliases": [ 00:05:51.418 "8a4a91ba-454d-4ca8-98c0-60c786a78e1d" 00:05:51.419 ], 00:05:51.419 "product_name": "Malloc disk", 00:05:51.419 "block_size": 512, 00:05:51.419 "num_blocks": 16384, 00:05:51.419 "uuid": "8a4a91ba-454d-4ca8-98c0-60c786a78e1d", 00:05:51.419 "assigned_rate_limits": { 00:05:51.419 "rw_ios_per_sec": 0, 00:05:51.419 "rw_mbytes_per_sec": 0, 00:05:51.419 "r_mbytes_per_sec": 0, 00:05:51.419 "w_mbytes_per_sec": 0 00:05:51.419 }, 00:05:51.419 "claimed": true, 00:05:51.419 "claim_type": "exclusive_write", 00:05:51.419 "zoned": false, 00:05:51.419 "supported_io_types": { 00:05:51.419 "read": true, 00:05:51.419 "write": true, 00:05:51.419 "unmap": true, 00:05:51.419 "write_zeroes": true, 00:05:51.419 "flush": true, 00:05:51.419 "reset": true, 00:05:51.419 "compare": false, 00:05:51.419 "compare_and_write": false, 00:05:51.419 "abort": true, 00:05:51.419 "nvme_admin": false, 00:05:51.419 "nvme_io": false 00:05:51.419 }, 00:05:51.419 "memory_domains": [ 00:05:51.419 { 00:05:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.419 "dma_device_type": 2 00:05:51.419 } 00:05:51.419 ], 00:05:51.419 "driver_specific": {} 00:05:51.419 }, 00:05:51.419 { 00:05:51.419 "name": "Passthru0", 00:05:51.419 "aliases": [ 00:05:51.419 "4433a660-efe7-558f-b251-3434e0958d3c" 00:05:51.419 ], 00:05:51.419 "product_name": "passthru", 00:05:51.419 "block_size": 512, 00:05:51.419 "num_blocks": 16384, 00:05:51.419 "uuid": "4433a660-efe7-558f-b251-3434e0958d3c", 00:05:51.419 "assigned_rate_limits": { 00:05:51.419 "rw_ios_per_sec": 0, 00:05:51.419 "rw_mbytes_per_sec": 0, 00:05:51.419 "r_mbytes_per_sec": 0, 00:05:51.419 "w_mbytes_per_sec": 0 00:05:51.419 }, 00:05:51.419 "claimed": false, 00:05:51.419 "zoned": false, 00:05:51.419 "supported_io_types": { 00:05:51.419 "read": true, 00:05:51.419 "write": true, 00:05:51.419 "unmap": true, 00:05:51.419 "write_zeroes": true, 00:05:51.419 "flush": true, 00:05:51.419 "reset": true, 00:05:51.419 "compare": false, 00:05:51.419 "compare_and_write": false, 00:05:51.419 "abort": true, 00:05:51.419 "nvme_admin": false, 00:05:51.419 "nvme_io": false 00:05:51.419 }, 00:05:51.419 "memory_domains": [ 00:05:51.419 { 00:05:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.419 "dma_device_type": 2 00:05:51.419 } 00:05:51.419 ], 00:05:51.419 "driver_specific": { 00:05:51.419 "passthru": { 00:05:51.419 "name": "Passthru0", 00:05:51.419 "base_bdev_name": "Malloc2" 00:05:51.419 } 00:05:51.419 } 00:05:51.419 } 00:05:51.419 ]' 00:05:51.419 06:30:31 -- rpc/rpc.sh@21 -- # jq length 00:05:51.419 06:30:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.419 06:30:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.419 06:30:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.419 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.419 06:30:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.419 06:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.419 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.419 06:30:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.419 06:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.419 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 06:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.419 06:30:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.419 06:30:31 -- rpc/rpc.sh@26 -- # jq length 00:05:51.419 06:30:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.419 00:05:51.419 real 0m0.334s 00:05:51.419 user 0m0.221s 00:05:51.419 sys 0m0.043s 00:05:51.419 06:30:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.419 ************************************ 00:05:51.419 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 END TEST rpc_daemon_integrity 00:05:51.419 ************************************ 00:05:51.680 06:30:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.680 06:30:31 -- rpc/rpc.sh@84 -- # killprocess 65812 00:05:51.680 06:30:31 -- common/autotest_common.sh@926 -- # '[' -z 65812 ']' 00:05:51.680 06:30:31 -- common/autotest_common.sh@930 -- # kill -0 65812 00:05:51.680 06:30:31 -- common/autotest_common.sh@931 -- # uname 00:05:51.680 06:30:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.680 06:30:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65812 00:05:51.680 06:30:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.680 06:30:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.680 killing process with pid 65812 00:05:51.680 06:30:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65812' 00:05:51.680 06:30:31 -- common/autotest_common.sh@945 -- # kill 65812 00:05:51.680 06:30:31 -- common/autotest_common.sh@950 -- # wait 65812 00:05:51.943 00:05:51.943 real 0m2.812s 00:05:51.943 user 0m3.801s 00:05:51.943 sys 0m0.640s 00:05:51.943 06:30:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.943 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.943 ************************************ 00:05:51.943 END TEST rpc 00:05:51.943 ************************************ 00:05:51.943 06:30:31 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.943 06:30:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.943 06:30:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.943 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.943 ************************************ 00:05:51.943 START TEST rpc_client 00:05:51.943 ************************************ 00:05:51.943 06:30:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.943 * Looking for test storage... 
00:05:51.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:51.943 06:30:31 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:51.943 OK 00:05:51.943 06:30:31 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:51.943 00:05:51.943 real 0m0.102s 00:05:51.943 user 0m0.047s 00:05:51.943 sys 0m0.061s 00:05:51.943 06:30:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.943 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.943 ************************************ 00:05:51.943 END TEST rpc_client 00:05:51.943 ************************************ 00:05:51.943 06:30:31 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.943 06:30:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.943 06:30:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.943 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.943 ************************************ 00:05:51.943 START TEST json_config 00:05:51.943 ************************************ 00:05:51.943 06:30:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:52.201 06:30:31 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:52.201 06:30:31 -- nvmf/common.sh@7 -- # uname -s 00:05:52.201 06:30:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.201 06:30:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.201 06:30:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.201 06:30:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.201 06:30:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.201 06:30:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.201 06:30:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.201 06:30:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.201 06:30:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.201 06:30:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.201 06:30:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:05:52.201 06:30:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:05:52.201 06:30:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.201 06:30:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.201 06:30:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.201 06:30:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:52.201 06:30:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.201 06:30:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.201 06:30:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.201 06:30:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.201 06:30:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.201 06:30:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.201 06:30:31 -- paths/export.sh@5 -- # export PATH 00:05:52.201 06:30:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.201 06:30:31 -- nvmf/common.sh@46 -- # : 0 00:05:52.201 06:30:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:52.201 06:30:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:52.201 06:30:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:52.201 06:30:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.201 06:30:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.201 06:30:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:52.201 06:30:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:52.201 06:30:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:52.201 06:30:31 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:52.201 06:30:31 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:52.201 06:30:31 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:52.201 06:30:31 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:52.201 06:30:31 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:52.201 06:30:31 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:52.201 06:30:31 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:52.201 06:30:31 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:52.201 06:30:31 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:52.201 06:30:31 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:52.201 06:30:31 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:52.201 INFO: JSON configuration test init 
00:05:52.201 06:30:31 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:52.201 06:30:31 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:52.201 06:30:31 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:52.201 06:30:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.201 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.201 06:30:31 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:52.201 06:30:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.201 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.201 06:30:31 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:52.201 06:30:31 -- json_config/json_config.sh@98 -- # local app=target 00:05:52.201 06:30:31 -- json_config/json_config.sh@99 -- # shift 00:05:52.201 06:30:31 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:52.201 06:30:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:52.201 06:30:31 -- json_config/json_config.sh@111 -- # app_pid[$app]=66049 00:05:52.201 Waiting for target to run... 00:05:52.201 06:30:31 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:52.201 06:30:31 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:52.201 06:30:31 -- json_config/json_config.sh@114 -- # waitforlisten 66049 /var/tmp/spdk_tgt.sock 00:05:52.201 06:30:31 -- common/autotest_common.sh@819 -- # '[' -z 66049 ']' 00:05:52.201 06:30:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.201 06:30:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.201 06:30:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.201 06:30:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.201 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.201 [2024-07-12 06:30:31.975904] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
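[editor's note] The target here is launched with --wait-for-rpc, which holds SPDK before subsystem initialization until an RPC tells it to proceed; waitforlisten simply polls the Unix socket. A minimal standalone sketch of that handshake, assuming the binaries and socket path of this run (the harness itself resumes initialization via load_config; a bare framework_start_init is the minimal equivalent):

    # start the target paused before subsystem init
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # poll until the RPC socket answers, then let initialization continue
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    $rpc framework_start_init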
00:05:52.201 [2024-07-12 06:30:31.976011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66049 ] 00:05:52.460 [2024-07-12 06:30:32.259139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.460 [2024-07-12 06:30:32.278431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.460 [2024-07-12 06:30:32.278662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.028 06:30:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.028 06:30:32 -- common/autotest_common.sh@852 -- # return 0 00:05:53.028 00:05:53.028 06:30:32 -- json_config/json_config.sh@115 -- # echo '' 00:05:53.028 06:30:32 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:53.028 06:30:32 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:53.028 06:30:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:53.028 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.028 06:30:32 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:53.028 06:30:32 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:53.028 06:30:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.028 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:53.290 06:30:32 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:53.290 06:30:32 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:53.290 06:30:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:53.548 06:30:33 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:53.548 06:30:33 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:53.548 06:30:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:53.548 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.548 06:30:33 -- json_config/json_config.sh@48 -- # local ret=0 00:05:53.548 06:30:33 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:53.548 06:30:33 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:53.548 06:30:33 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:53.548 06:30:33 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:53.548 06:30:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:53.807 06:30:33 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:53.807 06:30:33 -- json_config/json_config.sh@51 -- # local get_types 00:05:53.807 06:30:33 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:53.807 06:30:33 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:53.807 06:30:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.807 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:54.065 06:30:33 -- json_config/json_config.sh@58 -- # return 0 00:05:54.065 06:30:33 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:54.065 06:30:33 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
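[editor's note] tgt_check_notification_types above asserts that exactly bdev_register and bdev_unregister are the enabled notification types. Outside the harness the same check can be approximated like this, using the socket path of this run:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    types=$($rpc notify_get_types | jq -r '.[]' | sort | xargs)
    [ "$types" = 'bdev_register bdev_unregister' ] && echo 'notification types OK'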
00:05:54.065 06:30:33 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:54.065 06:30:33 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:54.065 06:30:33 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:54.065 06:30:33 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:54.065 06:30:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:54.065 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:54.065 06:30:33 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:54.065 06:30:33 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:54.065 06:30:33 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:54.065 06:30:33 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:54.065 06:30:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:54.324 MallocForNvmf0 00:05:54.324 06:30:34 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.324 06:30:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.582 MallocForNvmf1 00:05:54.582 06:30:34 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.582 06:30:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.841 [2024-07-12 06:30:34.572165] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.841 06:30:34 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:54.841 06:30:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.100 06:30:34 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.100 06:30:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.358 06:30:35 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.358 06:30:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.616 06:30:35 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:55.616 06:30:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:55.874 [2024-07-12 06:30:35.616867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.874 06:30:35 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:55.874 06:30:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:55.874 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.874 06:30:35 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:55.874 06:30:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:55.874 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.874 06:30:35 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:55.874 06:30:35 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.874 06:30:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:56.132 MallocBdevForConfigChangeCheck 00:05:56.132 06:30:35 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:56.132 06:30:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:56.132 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.132 06:30:35 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:56.133 06:30:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.698 INFO: shutting down applications... 00:05:56.698 06:30:36 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:56.698 06:30:36 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:56.698 06:30:36 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:56.698 06:30:36 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:56.698 06:30:36 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:56.956 Calling clear_iscsi_subsystem 00:05:56.956 Calling clear_nvmf_subsystem 00:05:56.956 Calling clear_nbd_subsystem 00:05:56.956 Calling clear_ublk_subsystem 00:05:56.956 Calling clear_vhost_blk_subsystem 00:05:56.956 Calling clear_vhost_scsi_subsystem 00:05:56.956 Calling clear_scheduler_subsystem 00:05:56.956 Calling clear_bdev_subsystem 00:05:56.956 Calling clear_accel_subsystem 00:05:56.956 Calling clear_vmd_subsystem 00:05:56.956 Calling clear_sock_subsystem 00:05:56.956 Calling clear_iobuf_subsystem 00:05:56.956 06:30:36 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:56.956 06:30:36 -- json_config/json_config.sh@396 -- # count=100 00:05:56.956 06:30:36 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:56.956 06:30:36 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.956 06:30:36 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:56.956 06:30:36 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:57.213 06:30:37 -- json_config/json_config.sh@398 -- # break 00:05:57.213 06:30:37 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:57.213 06:30:37 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:57.213 06:30:37 -- json_config/json_config.sh@120 -- # local app=target 00:05:57.213 06:30:37 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:57.213 06:30:37 -- json_config/json_config.sh@124 -- # [[ -n 66049 ]] 00:05:57.213 06:30:37 -- json_config/json_config.sh@127 -- # kill -SIGINT 66049 00:05:57.213 06:30:37 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:05:57.213 06:30:37 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:57.213 06:30:37 -- json_config/json_config.sh@130 -- # kill -0 66049 00:05:57.213 06:30:37 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:57.780 06:30:37 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:57.780 06:30:37 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:57.780 06:30:37 -- json_config/json_config.sh@130 -- # kill -0 66049 00:05:57.780 06:30:37 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:57.780 06:30:37 -- json_config/json_config.sh@132 -- # break 00:05:57.780 06:30:37 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:57.780 SPDK target shutdown done 00:05:57.780 06:30:37 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:57.780 INFO: relaunching applications... 00:05:57.780 06:30:37 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:57.780 06:30:37 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.780 06:30:37 -- json_config/json_config.sh@98 -- # local app=target 00:05:57.780 06:30:37 -- json_config/json_config.sh@99 -- # shift 00:05:57.780 06:30:37 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:57.780 06:30:37 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:57.780 06:30:37 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:57.780 06:30:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:57.780 06:30:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:57.780 06:30:37 -- json_config/json_config.sh@111 -- # app_pid[$app]=66244 00:05:57.780 Waiting for target to run... 00:05:57.780 06:30:37 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:57.780 06:30:37 -- json_config/json_config.sh@114 -- # waitforlisten 66244 /var/tmp/spdk_tgt.sock 00:05:57.780 06:30:37 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.780 06:30:37 -- common/autotest_common.sh@819 -- # '[' -z 66244 ']' 00:05:57.780 06:30:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.780 06:30:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.780 06:30:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.780 06:30:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.780 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:57.780 [2024-07-12 06:30:37.682780] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
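[editor's note] This relaunch is the core of the json_config round trip: the configuration captured from the first instance with save_config is replayed by starting a fresh target with --json instead of --wait-for-rpc. Condensed, with the paths of this run assumed:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    $rpc save_config > "$cfg"    # capture the live config, then stop the target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json "$cfg"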
00:05:57.781 [2024-07-12 06:30:37.682874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66244 ] 00:05:58.348 [2024-07-12 06:30:37.991971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.348 [2024-07-12 06:30:38.015272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.348 [2024-07-12 06:30:38.015462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.606 [2024-07-12 06:30:38.315465] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.606 [2024-07-12 06:30:38.347534] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:58.864 06:30:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.864 06:30:38 -- common/autotest_common.sh@852 -- # return 0 00:05:58.864 00:05:58.864 06:30:38 -- json_config/json_config.sh@115 -- # echo '' 00:05:58.864 06:30:38 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:58.864 INFO: Checking if target configuration is the same... 00:05:58.864 06:30:38 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:58.864 06:30:38 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.864 06:30:38 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:58.864 06:30:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.864 + '[' 2 -ne 2 ']' 00:05:58.864 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:58.864 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:58.864 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:58.864 +++ basename /dev/fd/62 00:05:58.864 ++ mktemp /tmp/62.XXX 00:05:58.864 + tmp_file_1=/tmp/62.2Qp 00:05:58.864 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.864 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:58.864 + tmp_file_2=/tmp/spdk_tgt_config.json.o6n 00:05:58.864 + ret=0 00:05:58.864 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:59.430 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:59.430 + diff -u /tmp/62.2Qp /tmp/spdk_tgt_config.json.o6n 00:05:59.430 INFO: JSON config files are the same 00:05:59.430 + echo 'INFO: JSON config files are the same' 00:05:59.430 + rm /tmp/62.2Qp /tmp/spdk_tgt_config.json.o6n 00:05:59.430 + exit 0 00:05:59.430 INFO: changing configuration and checking if this can be detected... 00:05:59.430 06:30:39 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:59.430 06:30:39 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
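[editor's note] The 'INFO: JSON config files are the same' verdict above comes from normalizing both documents with config_filter.py -method sort before diffing, so key ordering and runtime noise cannot cause false mismatches. Roughly, under the same repo layout:

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $filter -method sort > /tmp/live.json
    $filter -method sort < "$cfg" > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'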
00:05:59.430 06:30:39 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.430 06:30:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.689 06:30:39 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:59.689 06:30:39 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:59.689 06:30:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.689 + '[' 2 -ne 2 ']' 00:05:59.690 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:59.690 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:59.690 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:59.690 +++ basename /dev/fd/62 00:05:59.690 ++ mktemp /tmp/62.XXX 00:05:59.690 + tmp_file_1=/tmp/62.bji 00:05:59.690 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:59.690 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:59.690 + tmp_file_2=/tmp/spdk_tgt_config.json.fTq 00:05:59.690 + ret=0 00:05:59.690 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:00.255 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:00.255 + diff -u /tmp/62.bji /tmp/spdk_tgt_config.json.fTq 00:06:00.255 + ret=1 00:06:00.255 + echo '=== Start of file: /tmp/62.bji ===' 00:06:00.255 + cat /tmp/62.bji 00:06:00.255 + echo '=== End of file: /tmp/62.bji ===' 00:06:00.255 + echo '' 00:06:00.255 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fTq ===' 00:06:00.255 + cat /tmp/spdk_tgt_config.json.fTq 00:06:00.255 + echo '=== End of file: /tmp/spdk_tgt_config.json.fTq ===' 00:06:00.255 + echo '' 00:06:00.255 + rm /tmp/62.bji /tmp/spdk_tgt_config.json.fTq 00:06:00.255 + exit 1 00:06:00.255 INFO: configuration change detected. 00:06:00.255 06:30:39 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
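[editor's note] Change detection, just completed above with ret=1, is the inverse test: delete the canary bdev MallocBdevForConfigChangeCheck, which exists only to be removed, rerun the sorted diff, and require it to fail. A sketch under the same assumptions as the previous one (/tmp/file.json carried over from it):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | $filter -method sort > /tmp/live.json
    diff -u /tmp/live.json /tmp/file.json || echo 'INFO: configuration change detected.'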
00:06:00.255 06:30:39 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:00.255 06:30:39 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:00.255 06:30:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:00.255 06:30:39 -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 06:30:39 -- json_config/json_config.sh@360 -- # local ret=0 00:06:00.255 06:30:39 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:00.255 06:30:39 -- json_config/json_config.sh@370 -- # [[ -n 66244 ]] 00:06:00.255 06:30:39 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:00.255 06:30:39 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:00.255 06:30:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:00.255 06:30:39 -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 06:30:39 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:00.255 06:30:39 -- json_config/json_config.sh@246 -- # uname -s 00:06:00.255 06:30:39 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:00.255 06:30:39 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:00.255 06:30:40 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:00.255 06:30:40 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:00.255 06:30:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:00.255 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 06:30:40 -- json_config/json_config.sh@376 -- # killprocess 66244 00:06:00.255 06:30:40 -- common/autotest_common.sh@926 -- # '[' -z 66244 ']' 00:06:00.255 06:30:40 -- common/autotest_common.sh@930 -- # kill -0 66244 00:06:00.255 06:30:40 -- common/autotest_common.sh@931 -- # uname 00:06:00.255 06:30:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.255 06:30:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66244 00:06:00.255 killing process with pid 66244 00:06:00.255 06:30:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:00.255 06:30:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:00.255 06:30:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66244' 00:06:00.255 06:30:40 -- common/autotest_common.sh@945 -- # kill 66244 00:06:00.255 06:30:40 -- common/autotest_common.sh@950 -- # wait 66244 00:06:00.514 06:30:40 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:00.514 06:30:40 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:00.514 06:30:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:00.514 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.514 INFO: Success 00:06:00.514 06:30:40 -- json_config/json_config.sh@381 -- # return 0 00:06:00.514 06:30:40 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:00.514 00:06:00.514 real 0m8.421s 00:06:00.514 user 0m12.472s 00:06:00.514 sys 0m1.425s 00:06:00.514 ************************************ 00:06:00.514 END TEST json_config 00:06:00.514 ************************************ 00:06:00.514 06:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.514 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.514 06:30:40 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:00.514 
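[editor's note] The killprocess 66244 teardown above follows autotest's standard pattern: probe the pid, confirm the process is an SPDK reactor rather than a sudo wrapper, then kill and reap it so the RPC socket is free for the next suite. The essence, with the pid of this run as a placeholder:

    pid=66244                          # pid from this run; normally captured at launch
    kill -0 "$pid"                     # fails if the target already exited
    ps --no-headers -o comm= "$pid"    # an SPDK app reports reactor_0
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"         # wait works because the harness spawned it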
06:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.514 06:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.514 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.514 ************************************ 00:06:00.514 START TEST json_config_extra_key 00:06:00.514 ************************************ 00:06:00.514 06:30:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.514 06:30:40 -- nvmf/common.sh@7 -- # uname -s 00:06:00.514 06:30:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.514 06:30:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.514 06:30:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.514 06:30:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.514 06:30:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.514 06:30:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.514 06:30:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.514 06:30:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.514 06:30:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.514 06:30:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.514 06:30:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:06:00.514 06:30:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:06:00.514 06:30:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.514 06:30:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.514 06:30:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.514 06:30:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.514 06:30:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.514 06:30:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.514 06:30:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.514 06:30:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.514 06:30:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.514 06:30:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:00.514 06:30:40 -- paths/export.sh@5 -- # export PATH 00:06:00.514 06:30:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.514 06:30:40 -- nvmf/common.sh@46 -- # : 0 00:06:00.514 06:30:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:00.514 06:30:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:00.514 06:30:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:00.514 06:30:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.514 06:30:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.514 06:30:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:00.514 06:30:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:00.514 06:30:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:00.514 INFO: launching applications... 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66379 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:00.514 Waiting for target to run... 00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:00.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:00.514 06:30:40 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66379 /var/tmp/spdk_tgt.sock 00:06:00.514 06:30:40 -- common/autotest_common.sh@819 -- # '[' -z 66379 ']' 00:06:00.514 06:30:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.514 06:30:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.514 06:30:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.514 06:30:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.514 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 [2024-07-12 06:30:40.437193] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:00.773 [2024-07-12 06:30:40.437285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66379 ] 00:06:01.032 [2024-07-12 06:30:40.754056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.032 [2024-07-12 06:30:40.778864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.032 [2024-07-12 06:30:40.779078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.599 00:06:01.599 INFO: shutting down applications... 00:06:01.599 06:30:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.599 06:30:41 -- common/autotest_common.sh@852 -- # return 0 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
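[editor's note] json_config_test_shutdown_app, invoked next, sends SIGINT and then polls for up to thirty half-second intervals until the pid disappears; kill -0 only probes, it delivers no signal. A condensed sketch, with $pid assumed to hold the target's pid (66379 in this run):

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # gone: shutdown finished
        sleep 0.5
    done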
00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66379 ]] 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66379 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66379 00:06:01.599 06:30:41 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66379 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:02.165 SPDK target shutdown done 00:06:02.165 06:30:41 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:02.165 Success 00:06:02.165 00:06:02.165 real 0m1.596s 00:06:02.165 user 0m1.413s 00:06:02.165 sys 0m0.296s 00:06:02.165 06:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.165 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.165 ************************************ 00:06:02.165 END TEST json_config_extra_key 00:06:02.165 ************************************ 00:06:02.166 06:30:41 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.166 06:30:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.166 06:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.166 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.166 ************************************ 00:06:02.166 START TEST alias_rpc 00:06:02.166 ************************************ 00:06:02.166 06:30:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.166 * Looking for test storage... 00:06:02.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:02.166 06:30:42 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.166 06:30:42 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66448 00:06:02.166 06:30:42 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66448 00:06:02.166 06:30:42 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.166 06:30:42 -- common/autotest_common.sh@819 -- # '[' -z 66448 ']' 00:06:02.166 06:30:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.166 06:30:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.166 06:30:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:02.166 06:30:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.166 06:30:42 -- common/autotest_common.sh@10 -- # set +x 00:06:02.166 [2024-07-12 06:30:42.081614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:02.166 [2024-07-12 06:30:42.081926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66448 ] 00:06:02.423 [2024-07-12 06:30:42.226489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.423 [2024-07-12 06:30:42.266541] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.423 [2024-07-12 06:30:42.266732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.355 06:30:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.355 06:30:43 -- common/autotest_common.sh@852 -- # return 0 00:06:03.355 06:30:43 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:03.612 06:30:43 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66448 00:06:03.612 06:30:43 -- common/autotest_common.sh@926 -- # '[' -z 66448 ']' 00:06:03.612 06:30:43 -- common/autotest_common.sh@930 -- # kill -0 66448 00:06:03.612 06:30:43 -- common/autotest_common.sh@931 -- # uname 00:06:03.612 06:30:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:03.612 06:30:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66448 00:06:03.612 killing process with pid 66448 00:06:03.612 06:30:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:03.612 06:30:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:03.612 06:30:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66448' 00:06:03.612 06:30:43 -- common/autotest_common.sh@945 -- # kill 66448 00:06:03.612 06:30:43 -- common/autotest_common.sh@950 -- # wait 66448 00:06:03.870 ************************************ 00:06:03.870 END TEST alias_rpc 00:06:03.870 ************************************ 00:06:03.870 00:06:03.870 real 0m1.779s 00:06:03.870 user 0m2.226s 00:06:03.870 sys 0m0.359s 00:06:03.870 06:30:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.870 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:06:03.870 06:30:43 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:03.870 06:30:43 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:03.870 06:30:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.870 06:30:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.870 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:06:03.870 ************************************ 00:06:03.870 START TEST spdkcli_tcp 00:06:03.870 ************************************ 00:06:03.870 06:30:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:04.128 * Looking for test storage... 
00:06:04.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:04.128 06:30:43 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:04.128 06:30:43 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.128 06:30:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:04.128 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66522 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@27 -- # waitforlisten 66522 00:06:04.128 06:30:43 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.128 06:30:43 -- common/autotest_common.sh@819 -- # '[' -z 66522 ']' 00:06:04.128 06:30:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.128 06:30:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.128 06:30:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.128 06:30:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.128 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.128 [2024-07-12 06:30:43.885866] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
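[editor's note] spdkcli_tcp is the first suite in this log to use more than one core: -m 0x3 is a bitmask selecting cores 0 and 1, hence the two reactor threads announced below, and -p 0 pins the main core. Its other distinguishing move, also visible just below, is bridging TCP to the default Unix RPC socket so rpc.py can talk over 127.0.0.1:9998:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods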
00:06:04.128 [2024-07-12 06:30:43.885974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66522 ] 00:06:04.128 [2024-07-12 06:30:44.020490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.386 [2024-07-12 06:30:44.061091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.386 [2024-07-12 06:30:44.061417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.386 [2024-07-12 06:30:44.061429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.343 06:30:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.343 06:30:44 -- common/autotest_common.sh@852 -- # return 0 00:06:05.343 06:30:44 -- spdkcli/tcp.sh@31 -- # socat_pid=66540 00:06:05.343 06:30:44 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.343 06:30:44 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:05.343 [ 00:06:05.343 "bdev_malloc_delete", 00:06:05.343 "bdev_malloc_create", 00:06:05.343 "bdev_null_resize", 00:06:05.343 "bdev_null_delete", 00:06:05.343 "bdev_null_create", 00:06:05.343 "bdev_nvme_cuse_unregister", 00:06:05.343 "bdev_nvme_cuse_register", 00:06:05.343 "bdev_opal_new_user", 00:06:05.343 "bdev_opal_set_lock_state", 00:06:05.343 "bdev_opal_delete", 00:06:05.343 "bdev_opal_get_info", 00:06:05.343 "bdev_opal_create", 00:06:05.343 "bdev_nvme_opal_revert", 00:06:05.343 "bdev_nvme_opal_init", 00:06:05.343 "bdev_nvme_send_cmd", 00:06:05.343 "bdev_nvme_get_path_iostat", 00:06:05.343 "bdev_nvme_get_mdns_discovery_info", 00:06:05.343 "bdev_nvme_stop_mdns_discovery", 00:06:05.343 "bdev_nvme_start_mdns_discovery", 00:06:05.343 "bdev_nvme_set_multipath_policy", 00:06:05.343 "bdev_nvme_set_preferred_path", 00:06:05.343 "bdev_nvme_get_io_paths", 00:06:05.343 "bdev_nvme_remove_error_injection", 00:06:05.343 "bdev_nvme_add_error_injection", 00:06:05.343 "bdev_nvme_get_discovery_info", 00:06:05.343 "bdev_nvme_stop_discovery", 00:06:05.343 "bdev_nvme_start_discovery", 00:06:05.343 "bdev_nvme_get_controller_health_info", 00:06:05.343 "bdev_nvme_disable_controller", 00:06:05.343 "bdev_nvme_enable_controller", 00:06:05.343 "bdev_nvme_reset_controller", 00:06:05.343 "bdev_nvme_get_transport_statistics", 00:06:05.343 "bdev_nvme_apply_firmware", 00:06:05.343 "bdev_nvme_detach_controller", 00:06:05.343 "bdev_nvme_get_controllers", 00:06:05.343 "bdev_nvme_attach_controller", 00:06:05.343 "bdev_nvme_set_hotplug", 00:06:05.343 "bdev_nvme_set_options", 00:06:05.343 "bdev_passthru_delete", 00:06:05.343 "bdev_passthru_create", 00:06:05.343 "bdev_lvol_grow_lvstore", 00:06:05.343 "bdev_lvol_get_lvols", 00:06:05.343 "bdev_lvol_get_lvstores", 00:06:05.343 "bdev_lvol_delete", 00:06:05.343 "bdev_lvol_set_read_only", 00:06:05.343 "bdev_lvol_resize", 00:06:05.343 "bdev_lvol_decouple_parent", 00:06:05.343 "bdev_lvol_inflate", 00:06:05.343 "bdev_lvol_rename", 00:06:05.343 "bdev_lvol_clone_bdev", 00:06:05.343 "bdev_lvol_clone", 00:06:05.343 "bdev_lvol_snapshot", 00:06:05.343 "bdev_lvol_create", 00:06:05.343 "bdev_lvol_delete_lvstore", 00:06:05.343 "bdev_lvol_rename_lvstore", 00:06:05.343 "bdev_lvol_create_lvstore", 00:06:05.343 "bdev_raid_set_options", 00:06:05.343 "bdev_raid_remove_base_bdev", 00:06:05.343 "bdev_raid_add_base_bdev", 
00:06:05.343 "bdev_raid_delete", 00:06:05.343 "bdev_raid_create", 00:06:05.343 "bdev_raid_get_bdevs", 00:06:05.343 "bdev_error_inject_error", 00:06:05.343 "bdev_error_delete", 00:06:05.343 "bdev_error_create", 00:06:05.343 "bdev_split_delete", 00:06:05.343 "bdev_split_create", 00:06:05.343 "bdev_delay_delete", 00:06:05.343 "bdev_delay_create", 00:06:05.343 "bdev_delay_update_latency", 00:06:05.343 "bdev_zone_block_delete", 00:06:05.343 "bdev_zone_block_create", 00:06:05.343 "blobfs_create", 00:06:05.343 "blobfs_detect", 00:06:05.343 "blobfs_set_cache_size", 00:06:05.343 "bdev_aio_delete", 00:06:05.343 "bdev_aio_rescan", 00:06:05.343 "bdev_aio_create", 00:06:05.343 "bdev_ftl_set_property", 00:06:05.343 "bdev_ftl_get_properties", 00:06:05.343 "bdev_ftl_get_stats", 00:06:05.343 "bdev_ftl_unmap", 00:06:05.343 "bdev_ftl_unload", 00:06:05.343 "bdev_ftl_delete", 00:06:05.343 "bdev_ftl_load", 00:06:05.343 "bdev_ftl_create", 00:06:05.343 "bdev_virtio_attach_controller", 00:06:05.343 "bdev_virtio_scsi_get_devices", 00:06:05.343 "bdev_virtio_detach_controller", 00:06:05.343 "bdev_virtio_blk_set_hotplug", 00:06:05.343 "bdev_iscsi_delete", 00:06:05.343 "bdev_iscsi_create", 00:06:05.343 "bdev_iscsi_set_options", 00:06:05.343 "bdev_uring_delete", 00:06:05.343 "bdev_uring_create", 00:06:05.343 "accel_error_inject_error", 00:06:05.343 "ioat_scan_accel_module", 00:06:05.343 "dsa_scan_accel_module", 00:06:05.343 "iaa_scan_accel_module", 00:06:05.343 "iscsi_set_options", 00:06:05.343 "iscsi_get_auth_groups", 00:06:05.343 "iscsi_auth_group_remove_secret", 00:06:05.343 "iscsi_auth_group_add_secret", 00:06:05.343 "iscsi_delete_auth_group", 00:06:05.343 "iscsi_create_auth_group", 00:06:05.343 "iscsi_set_discovery_auth", 00:06:05.343 "iscsi_get_options", 00:06:05.343 "iscsi_target_node_request_logout", 00:06:05.343 "iscsi_target_node_set_redirect", 00:06:05.343 "iscsi_target_node_set_auth", 00:06:05.343 "iscsi_target_node_add_lun", 00:06:05.343 "iscsi_get_connections", 00:06:05.343 "iscsi_portal_group_set_auth", 00:06:05.343 "iscsi_start_portal_group", 00:06:05.344 "iscsi_delete_portal_group", 00:06:05.344 "iscsi_create_portal_group", 00:06:05.344 "iscsi_get_portal_groups", 00:06:05.344 "iscsi_delete_target_node", 00:06:05.344 "iscsi_target_node_remove_pg_ig_maps", 00:06:05.344 "iscsi_target_node_add_pg_ig_maps", 00:06:05.344 "iscsi_create_target_node", 00:06:05.344 "iscsi_get_target_nodes", 00:06:05.344 "iscsi_delete_initiator_group", 00:06:05.344 "iscsi_initiator_group_remove_initiators", 00:06:05.344 "iscsi_initiator_group_add_initiators", 00:06:05.344 "iscsi_create_initiator_group", 00:06:05.344 "iscsi_get_initiator_groups", 00:06:05.344 "nvmf_set_crdt", 00:06:05.344 "nvmf_set_config", 00:06:05.344 "nvmf_set_max_subsystems", 00:06:05.344 "nvmf_subsystem_get_listeners", 00:06:05.344 "nvmf_subsystem_get_qpairs", 00:06:05.344 "nvmf_subsystem_get_controllers", 00:06:05.344 "nvmf_get_stats", 00:06:05.344 "nvmf_get_transports", 00:06:05.344 "nvmf_create_transport", 00:06:05.344 "nvmf_get_targets", 00:06:05.344 "nvmf_delete_target", 00:06:05.344 "nvmf_create_target", 00:06:05.344 "nvmf_subsystem_allow_any_host", 00:06:05.344 "nvmf_subsystem_remove_host", 00:06:05.344 "nvmf_subsystem_add_host", 00:06:05.344 "nvmf_subsystem_remove_ns", 00:06:05.344 "nvmf_subsystem_add_ns", 00:06:05.344 "nvmf_subsystem_listener_set_ana_state", 00:06:05.344 "nvmf_discovery_get_referrals", 00:06:05.344 "nvmf_discovery_remove_referral", 00:06:05.344 "nvmf_discovery_add_referral", 00:06:05.344 "nvmf_subsystem_remove_listener", 00:06:05.344 
"nvmf_subsystem_add_listener", 00:06:05.344 "nvmf_delete_subsystem", 00:06:05.344 "nvmf_create_subsystem", 00:06:05.344 "nvmf_get_subsystems", 00:06:05.344 "env_dpdk_get_mem_stats", 00:06:05.344 "nbd_get_disks", 00:06:05.344 "nbd_stop_disk", 00:06:05.344 "nbd_start_disk", 00:06:05.344 "ublk_recover_disk", 00:06:05.344 "ublk_get_disks", 00:06:05.344 "ublk_stop_disk", 00:06:05.344 "ublk_start_disk", 00:06:05.344 "ublk_destroy_target", 00:06:05.344 "ublk_create_target", 00:06:05.344 "virtio_blk_create_transport", 00:06:05.344 "virtio_blk_get_transports", 00:06:05.344 "vhost_controller_set_coalescing", 00:06:05.344 "vhost_get_controllers", 00:06:05.344 "vhost_delete_controller", 00:06:05.344 "vhost_create_blk_controller", 00:06:05.344 "vhost_scsi_controller_remove_target", 00:06:05.344 "vhost_scsi_controller_add_target", 00:06:05.344 "vhost_start_scsi_controller", 00:06:05.344 "vhost_create_scsi_controller", 00:06:05.344 "thread_set_cpumask", 00:06:05.344 "framework_get_scheduler", 00:06:05.344 "framework_set_scheduler", 00:06:05.344 "framework_get_reactors", 00:06:05.344 "thread_get_io_channels", 00:06:05.344 "thread_get_pollers", 00:06:05.344 "thread_get_stats", 00:06:05.344 "framework_monitor_context_switch", 00:06:05.344 "spdk_kill_instance", 00:06:05.344 "log_enable_timestamps", 00:06:05.344 "log_get_flags", 00:06:05.344 "log_clear_flag", 00:06:05.344 "log_set_flag", 00:06:05.344 "log_get_level", 00:06:05.344 "log_set_level", 00:06:05.344 "log_get_print_level", 00:06:05.344 "log_set_print_level", 00:06:05.344 "framework_enable_cpumask_locks", 00:06:05.344 "framework_disable_cpumask_locks", 00:06:05.344 "framework_wait_init", 00:06:05.344 "framework_start_init", 00:06:05.344 "scsi_get_devices", 00:06:05.344 "bdev_get_histogram", 00:06:05.344 "bdev_enable_histogram", 00:06:05.344 "bdev_set_qos_limit", 00:06:05.344 "bdev_set_qd_sampling_period", 00:06:05.344 "bdev_get_bdevs", 00:06:05.344 "bdev_reset_iostat", 00:06:05.344 "bdev_get_iostat", 00:06:05.344 "bdev_examine", 00:06:05.344 "bdev_wait_for_examine", 00:06:05.344 "bdev_set_options", 00:06:05.344 "notify_get_notifications", 00:06:05.344 "notify_get_types", 00:06:05.344 "accel_get_stats", 00:06:05.344 "accel_set_options", 00:06:05.344 "accel_set_driver", 00:06:05.344 "accel_crypto_key_destroy", 00:06:05.344 "accel_crypto_keys_get", 00:06:05.344 "accel_crypto_key_create", 00:06:05.344 "accel_assign_opc", 00:06:05.344 "accel_get_module_info", 00:06:05.344 "accel_get_opc_assignments", 00:06:05.344 "vmd_rescan", 00:06:05.344 "vmd_remove_device", 00:06:05.344 "vmd_enable", 00:06:05.344 "sock_set_default_impl", 00:06:05.344 "sock_impl_set_options", 00:06:05.344 "sock_impl_get_options", 00:06:05.344 "iobuf_get_stats", 00:06:05.344 "iobuf_set_options", 00:06:05.344 "framework_get_pci_devices", 00:06:05.344 "framework_get_config", 00:06:05.344 "framework_get_subsystems", 00:06:05.344 "trace_get_info", 00:06:05.344 "trace_get_tpoint_group_mask", 00:06:05.344 "trace_disable_tpoint_group", 00:06:05.344 "trace_enable_tpoint_group", 00:06:05.344 "trace_clear_tpoint_mask", 00:06:05.344 "trace_set_tpoint_mask", 00:06:05.344 "spdk_get_version", 00:06:05.344 "rpc_get_methods" 00:06:05.344 ] 00:06:05.344 06:30:45 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:05.344 06:30:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:05.344 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:06:05.344 06:30:45 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:05.344 06:30:45 -- spdkcli/tcp.sh@38 -- # killprocess 66522 00:06:05.344 
06:30:45 -- common/autotest_common.sh@926 -- # '[' -z 66522 ']' 00:06:05.344 06:30:45 -- common/autotest_common.sh@930 -- # kill -0 66522 00:06:05.344 06:30:45 -- common/autotest_common.sh@931 -- # uname 00:06:05.344 06:30:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.344 06:30:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66522 00:06:05.344 06:30:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.344 06:30:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.344 killing process with pid 66522 00:06:05.344 06:30:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66522' 00:06:05.344 06:30:45 -- common/autotest_common.sh@945 -- # kill 66522 00:06:05.344 06:30:45 -- common/autotest_common.sh@950 -- # wait 66522 00:06:05.603 00:06:05.603 real 0m1.680s 00:06:05.603 user 0m3.367s 00:06:05.603 sys 0m0.368s 00:06:05.603 06:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.603 ************************************ 00:06:05.603 END TEST spdkcli_tcp 00:06:05.603 ************************************ 00:06:05.603 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:06:05.603 06:30:45 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.603 06:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.603 06:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.603 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:06:05.603 ************************************ 00:06:05.603 START TEST dpdk_mem_utility 00:06:05.603 ************************************ 00:06:05.603 06:30:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.863 * Looking for test storage... 00:06:05.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:05.863 06:30:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.863 06:30:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66602 00:06:05.863 06:30:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66602 00:06:05.863 06:30:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.863 06:30:45 -- common/autotest_common.sh@819 -- # '[' -z 66602 ']' 00:06:05.863 06:30:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.863 06:30:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.863 06:30:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.863 06:30:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.863 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:06:05.863 [2024-07-12 06:30:45.624905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:05.863 [2024-07-12 06:30:45.625017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66602 ] 00:06:05.863 [2024-07-12 06:30:45.761972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.122 [2024-07-12 06:30:45.794834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.122 [2024-07-12 06:30:45.795058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.060 06:30:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.060 06:30:46 -- common/autotest_common.sh@852 -- # return 0 00:06:07.060 06:30:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:07.060 06:30:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:07.060 06:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.060 06:30:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.060 { 00:06:07.060 "filename": "/tmp/spdk_mem_dump.txt" 00:06:07.060 } 00:06:07.060 06:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.060 06:30:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:07.060 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:07.060 1 heaps totaling size 814.000000 MiB 00:06:07.060 size: 814.000000 MiB heap id: 0 00:06:07.060 end heaps---------- 00:06:07.060 8 mempools totaling size 598.116089 MiB 00:06:07.060 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:07.060 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:07.060 size: 84.521057 MiB name: bdev_io_66602 00:06:07.060 size: 51.011292 MiB name: evtpool_66602 00:06:07.060 size: 50.003479 MiB name: msgpool_66602 00:06:07.060 size: 21.763794 MiB name: PDU_Pool 00:06:07.060 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:07.060 size: 0.026123 MiB name: Session_Pool 00:06:07.060 end mempools------- 00:06:07.060 6 memzones totaling size 4.142822 MiB 00:06:07.060 size: 1.000366 MiB name: RG_ring_0_66602 00:06:07.060 size: 1.000366 MiB name: RG_ring_1_66602 00:06:07.060 size: 1.000366 MiB name: RG_ring_4_66602 00:06:07.060 size: 1.000366 MiB name: RG_ring_5_66602 00:06:07.060 size: 0.125366 MiB name: RG_ring_2_66602 00:06:07.060 size: 0.015991 MiB name: RG_ring_3_66602 00:06:07.060 end memzones------- 00:06:07.060 06:30:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:07.060 heap id: 0 total size: 814.000000 MiB number of busy elements: 310 number of free elements: 15 00:06:07.060 list of free elements. 
size: 12.470093 MiB 00:06:07.060 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:07.060 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:07.060 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:07.060 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:07.060 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:07.060 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:07.060 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:07.060 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:07.060 element at address: 0x200000200000 with size: 0.832825 MiB 00:06:07.060 element at address: 0x20001aa00000 with size: 0.567871 MiB 00:06:07.060 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:07.060 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:07.060 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:07.060 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:07.060 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:07.060 list of standard malloc elements. size: 199.267334 MiB 00:06:07.060 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:07.060 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:07.060 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:07.060 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:07.060 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:07.060 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:07.060 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:07.060 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:07.060 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:07.060 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:06:07.060 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:07.060 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:07.061 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa931c0 with size: 0.000183 MiB 
00:06:07.061 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:07.061 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:07.061 element at 
address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:07.061 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e880 
with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:07.062 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:07.062 list of memzone associated elements. 
size: 602.262573 MiB 00:06:07.062 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:07.062 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:07.062 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:07.062 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:07.062 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:07.062 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66602_0 00:06:07.062 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:07.062 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66602_0 00:06:07.062 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:07.062 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66602_0 00:06:07.062 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:07.062 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:07.062 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:07.062 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:07.062 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:07.062 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66602 00:06:07.062 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:07.062 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66602 00:06:07.062 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:07.062 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66602 00:06:07.062 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:07.062 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:07.062 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:07.062 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:07.062 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:07.062 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:07.062 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:07.062 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:07.062 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:07.062 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66602 00:06:07.062 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:07.062 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66602 00:06:07.062 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:07.062 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66602 00:06:07.062 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:07.062 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66602 00:06:07.062 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:07.062 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66602 00:06:07.062 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:07.062 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:07.062 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:07.062 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:07.062 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:07.062 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:07.062 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:07.062 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_66602 00:06:07.062 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:07.062 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:07.062 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:07.062 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:07.062 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:07.062 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66602 00:06:07.062 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:07.062 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:07.062 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:07.062 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66602 00:06:07.062 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:07.062 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66602 00:06:07.062 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:07.062 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:07.062 06:30:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:07.062 06:30:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66602 00:06:07.062 06:30:46 -- common/autotest_common.sh@926 -- # '[' -z 66602 ']' 00:06:07.062 06:30:46 -- common/autotest_common.sh@930 -- # kill -0 66602 00:06:07.062 06:30:46 -- common/autotest_common.sh@931 -- # uname 00:06:07.062 06:30:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.062 06:30:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66602 00:06:07.062 06:30:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.062 06:30:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.062 killing process with pid 66602 00:06:07.062 06:30:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66602' 00:06:07.062 06:30:46 -- common/autotest_common.sh@945 -- # kill 66602 00:06:07.062 06:30:46 -- common/autotest_common.sh@950 -- # wait 66602 00:06:07.322 00:06:07.322 real 0m1.533s 00:06:07.322 user 0m1.814s 00:06:07.322 sys 0m0.309s 00:06:07.322 06:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.322 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:06:07.322 ************************************ 00:06:07.322 END TEST dpdk_mem_utility 00:06:07.322 ************************************ 00:06:07.322 06:30:47 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:07.322 06:30:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.322 06:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.322 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:06:07.322 ************************************ 00:06:07.322 START TEST event 00:06:07.322 ************************************ 00:06:07.322 06:30:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:07.322 * Looking for test storage... 
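Before the event tests that begin below, note that the dpdk_mem_utility dump above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write a DPDK memory snapshot (the RPC's return value names the file), and dpdk_mem_info.py renders it. Condensed from the trace, assuming the script reads the default dump path the RPC reported:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # element-level detail for heap id 0 (the address list above)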
00:06:07.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:07.322 06:30:47 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:07.322 06:30:47 -- bdev/nbd_common.sh@6 -- # set -e 00:06:07.322 06:30:47 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.322 06:30:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:07.322 06:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.322 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:06:07.322 ************************************ 00:06:07.322 START TEST event_perf 00:06:07.322 ************************************ 00:06:07.322 06:30:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.322 Running I/O for 1 seconds...[2024-07-12 06:30:47.194498] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:07.322 [2024-07-12 06:30:47.194577] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66678 ] 00:06:07.580 [2024-07-12 06:30:47.334173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.580 Running I/O for 1 seconds...[2024-07-12 06:30:47.373126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.580 [2024-07-12 06:30:47.373248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.580 [2024-07-12 06:30:47.373303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.580 [2024-07-12 06:30:47.373308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.517 00:06:08.517 lcore 0: 179349 00:06:08.517 lcore 1: 179350 00:06:08.517 lcore 2: 179350 00:06:08.517 lcore 3: 179350 00:06:08.517 done. 00:06:08.517 00:06:08.517 real 0m1.255s 00:06:08.517 user 0m4.076s 00:06:08.517 sys 0m0.057s 00:06:08.517 06:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.517 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:06:08.517 ************************************ 00:06:08.517 END TEST event_perf 00:06:08.517 ************************************ 00:06:08.775 06:30:48 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.775 06:30:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:08.775 06:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.775 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:06:08.775 ************************************ 00:06:08.775 START TEST event_reactor 00:06:08.775 ************************************ 00:06:08.775 06:30:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.775 [2024-07-12 06:30:48.494598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:08.775 [2024-07-12 06:30:48.494693] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66711 ] 00:06:08.775 [2024-07-12 06:30:48.631313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.775 [2024-07-12 06:30:48.662267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.150 test_start 00:06:10.150 oneshot 00:06:10.150 tick 100 00:06:10.150 tick 100 00:06:10.150 tick 250 00:06:10.150 tick 100 00:06:10.150 tick 100 00:06:10.150 tick 250 00:06:10.150 tick 500 00:06:10.150 tick 100 00:06:10.150 tick 100 00:06:10.150 tick 100 00:06:10.150 tick 250 00:06:10.150 tick 100 00:06:10.150 tick 100 00:06:10.150 test_end 00:06:10.150 00:06:10.150 real 0m1.230s 00:06:10.150 user 0m1.088s 00:06:10.150 sys 0m0.036s 00:06:10.150 06:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.150 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 ************************************ 00:06:10.150 END TEST event_reactor 00:06:10.150 ************************************ 00:06:10.150 06:30:49 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.150 06:30:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:10.150 06:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.150 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 ************************************ 00:06:10.150 START TEST event_reactor_perf 00:06:10.150 ************************************ 00:06:10.150 06:30:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.150 [2024-07-12 06:30:49.776125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
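Each event-framework micro-test here is a standalone binary run for one second with -t 1: event_perf counts events dispatched per lcore, reactor exercises a oneshot event plus timed pollers at periods 100, 250, and 500 (the tick lines above), and reactor_perf, starting below, measures raw events per second through a single reactor. The invocations as traced:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1    # four reactors, per-lcore counts
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1                 # one reactor, oneshot plus ticks
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1       # one reactor, events per second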
00:06:10.150 [2024-07-12 06:30:49.776216] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66741 ] 00:06:10.150 [2024-07-12 06:30:49.911070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.150 [2024-07-12 06:30:49.939944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.084 test_start 00:06:11.084 test_end 00:06:11.084 Performance: 376155 events per second 00:06:11.084 00:06:11.084 real 0m1.231s 00:06:11.084 user 0m1.090s 00:06:11.084 sys 0m0.035s 00:06:11.084 06:30:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.084 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:06:11.084 ************************************ 00:06:11.084 END TEST event_reactor_perf 00:06:11.084 ************************************ 00:06:11.344 06:30:51 -- event/event.sh@49 -- # uname -s 00:06:11.344 06:30:51 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.344 06:30:51 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.344 06:30:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.344 06:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.344 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.344 ************************************ 00:06:11.344 START TEST event_scheduler 00:06:11.344 ************************************ 00:06:11.344 06:30:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.344 * Looking for test storage... 00:06:11.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:11.344 06:30:51 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.344 06:30:51 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66807 00:06:11.344 06:30:51 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.344 06:30:51 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.344 06:30:51 -- scheduler/scheduler.sh@37 -- # waitforlisten 66807 00:06:11.344 06:30:51 -- common/autotest_common.sh@819 -- # '[' -z 66807 ']' 00:06:11.344 06:30:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.344 06:30:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.344 06:30:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.344 06:30:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.344 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.344 [2024-07-12 06:30:51.169202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
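The scheduler harness being launched here uses --wait-for-rpc, so framework initialization pauses until the test selects a scheduler over RPC; -m 0xF gives it four cores and -p 0x2 makes core 2 the main lcore (the --main-lcore=2 in the EAL line below). The sequence, as traced below via the harness's rpc_cmd wrapper:

  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  rpc_cmd framework_set_scheduler dynamic     # governor probing (the POWER lines below) happens here
  rpc_cmd framework_start_init                # resume framework initialization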
00:06:11.344 [2024-07-12 06:30:51.169328] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66807 ] 00:06:11.603 [2024-07-12 06:30:51.310303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.603 [2024-07-12 06:30:51.355328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.603 [2024-07-12 06:30:51.358008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.603 [2024-07-12 06:30:51.358106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.603 [2024-07-12 06:30:51.358115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.603 06:30:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.603 06:30:51 -- common/autotest_common.sh@852 -- # return 0 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.603 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 POWER: Env isn't set yet! 00:06:11.603 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:11.603 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.603 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.603 POWER: Attempting to initialise PSTAT power management... 00:06:11.603 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.603 POWER: Cannot set governor of lcore 0 to performance 00:06:11.603 POWER: Attempting to initialise AMD PSTATE power management... 00:06:11.603 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.603 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.603 POWER: Attempting to initialise CPPC power management... 00:06:11.603 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.603 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.603 POWER: Attempting to initialise VM power management... 
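The POWER lines above are the dynamic scheduler probing each cpufreq backend in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC) and failing because this guest cannot open the scaling_governor sysfs files; the GUEST_CHANNEL error just below is the last resort, VM power management over a virtio serial port, failing as well, after which the test continues with the dpdk governor uninitialized. On bare metal the failed probe corresponds roughly to this sketch (whether a given governor string is accepted depends on the active cpufreq driver):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor                          # read the current governor
  echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor  # the write that fails in this VM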
00:06:11.603 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:11.603 POWER: Unable to set Power Management Environment for lcore 0 00:06:11.603 [2024-07-12 06:30:51.420171] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:11.603 [2024-07-12 06:30:51.420192] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:11.603 [2024-07-12 06:30:51.420214] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:11.603 [2024-07-12 06:30:51.420236] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.603 [2024-07-12 06:30:51.420249] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.603 [2024-07-12 06:30:51.420263] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.603 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.603 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 [2024-07-12 06:30:51.476017] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:11.603 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:11.603 06:30:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.603 06:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 ************************************ 00:06:11.603 START TEST scheduler_create_thread 00:06:11.603 ************************************ 00:06:11.603 06:30:51 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:11.603 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 2 00:06:11.603 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:11.603 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 3 00:06:11.603 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:11.603 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 4 00:06:11.603 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.603 06:30:51 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:11.603 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.603 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 5 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 6 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 7 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 8 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 9 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 10 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 06:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.862 06:30:51 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.862 06:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.862 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.237 06:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.237 06:30:53 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.237 06:30:53 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.237 06:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.237 06:30:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.611 06:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:14.611 00:06:14.611 real 0m2.612s 00:06:14.611 user 0m0.013s 00:06:14.611 sys 0m0.008s 00:06:14.611 06:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.611 06:30:54 -- common/autotest_common.sh@10 -- # set +x 00:06:14.611 
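The trace above walks the scheduler test app through a full thread lifecycle: pinned threads on masks 0x1-0x8 (active at 100%, idle at 0%), an unpinned one-third-active thread, a half-active thread rebalanced to 50% by its returned id, and a victim thread created only to be deleted. Spelled out as direct rpc.py calls, the tail of that sequence is roughly the following sketch (it assumes the scheduler_plugin module is importable, e.g. via a PYTHONPATH pointing at the scheduler test app's directory):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Pinned thread: name, cpumask, percentage of time spent busy.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Create an idle thread, then raise it to 50% busy using its returned id.
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # Create and immediately delete a thread to cover the delete path.
    victim=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$victim"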
************************************ 00:06:14.612 END TEST scheduler_create_thread 00:06:14.612 ************************************ 00:06:14.612 06:30:54 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:14.612 06:30:54 -- scheduler/scheduler.sh@46 -- # killprocess 66807 00:06:14.612 06:30:54 -- common/autotest_common.sh@926 -- # '[' -z 66807 ']' 00:06:14.612 06:30:54 -- common/autotest_common.sh@930 -- # kill -0 66807 00:06:14.612 06:30:54 -- common/autotest_common.sh@931 -- # uname 00:06:14.612 06:30:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.612 06:30:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66807 00:06:14.612 06:30:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:14.612 killing process with pid 66807 00:06:14.612 06:30:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:14.612 06:30:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66807' 00:06:14.612 06:30:54 -- common/autotest_common.sh@945 -- # kill 66807 00:06:14.612 06:30:54 -- common/autotest_common.sh@950 -- # wait 66807 00:06:14.871 [2024-07-12 06:30:54.579970] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:14.871 00:06:14.871 real 0m3.699s 00:06:14.871 user 0m5.535s 00:06:14.871 sys 0m0.262s 00:06:14.871 06:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.871 06:30:54 -- common/autotest_common.sh@10 -- # set +x 00:06:14.871 ************************************ 00:06:14.871 END TEST event_scheduler 00:06:14.871 ************************************ 00:06:14.871 06:30:54 -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.871 06:30:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.871 06:30:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.871 06:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.167 06:30:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.167 ************************************ 00:06:15.167 START TEST app_repeat 00:06:15.167 ************************************ 00:06:15.167 06:30:54 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:15.167 06:30:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.167 06:30:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.167 06:30:54 -- event/event.sh@13 -- # local nbd_list 00:06:15.167 06:30:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.167 06:30:54 -- event/event.sh@14 -- # local bdev_list 00:06:15.167 06:30:54 -- event/event.sh@15 -- # local repeat_times=4 00:06:15.167 06:30:54 -- event/event.sh@17 -- # modprobe nbd 00:06:15.167 06:30:54 -- event/event.sh@19 -- # repeat_pid=66888 00:06:15.167 06:30:54 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.167 06:30:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.167 Process app_repeat pid: 66888 00:06:15.167 06:30:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66888' 00:06:15.167 06:30:54 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.167 spdk_app_start Round 0 00:06:15.167 06:30:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.167 06:30:54 -- event/event.sh@25 -- # waitforlisten 66888 /var/tmp/spdk-nbd.sock 00:06:15.167 06:30:54 -- common/autotest_common.sh@819 -- # '[' -z 66888 ']' 00:06:15.167 06:30:54 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.167 06:30:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.167 06:30:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.167 06:30:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.167 06:30:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.167 [2024-07-12 06:30:54.829595] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:15.167 [2024-07-12 06:30:54.829675] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66888 ] 00:06:15.167 [2024-07-12 06:30:54.964773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.167 [2024-07-12 06:30:55.004896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.167 [2024-07-12 06:30:55.004906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.426 06:30:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.426 06:30:55 -- common/autotest_common.sh@852 -- # return 0 00:06:15.426 06:30:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.684 Malloc0 00:06:15.684 06:30:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.942 Malloc1 00:06:15.942 06:30:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@12 -- # local i 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.942 06:30:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.201 /dev/nbd0 00:06:16.201 06:30:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.201 06:30:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.201 06:30:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:16.201 06:30:55 -- common/autotest_common.sh@857 -- # local i 00:06:16.201 06:30:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.201 
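Behind the Malloc0/Malloc1 lines above are two plain RPCs against the app's socket: create a 64 MiB malloc bdev with a 4096-byte block size, then export it as a kernel nbd device. Run by hand, the sequence is roughly this sketch (same socket path as the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096          # prints the new bdev name, e.g. Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as a block device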
06:30:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.201 06:30:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:16.201 06:30:55 -- common/autotest_common.sh@861 -- # break 00:06:16.201 06:30:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.201 06:30:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.201 06:30:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.201 1+0 records in 00:06:16.201 1+0 records out 00:06:16.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035489 s, 11.5 MB/s 00:06:16.201 06:30:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.201 06:30:55 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.201 06:30:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.201 06:30:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.201 06:30:55 -- common/autotest_common.sh@877 -- # return 0 00:06:16.201 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.201 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.201 06:30:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.459 /dev/nbd1 00:06:16.459 06:30:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.459 06:30:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.459 06:30:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:16.459 06:30:56 -- common/autotest_common.sh@857 -- # local i 00:06:16.459 06:30:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.459 06:30:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.459 06:30:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:16.459 06:30:56 -- common/autotest_common.sh@861 -- # break 00:06:16.459 06:30:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.459 06:30:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.459 06:30:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.459 1+0 records in 00:06:16.459 1+0 records out 00:06:16.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343656 s, 11.9 MB/s 00:06:16.459 06:30:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.459 06:30:56 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.459 06:30:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.459 06:30:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.459 06:30:56 -- common/autotest_common.sh@877 -- # return 0 00:06:16.459 06:30:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.459 06:30:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.460 06:30:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.460 06:30:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.460 06:30:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd0", 00:06:16.718 "bdev_name": "Malloc0" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": 
"/dev/nbd1", 00:06:16.718 "bdev_name": "Malloc1" 00:06:16.718 } 00:06:16.718 ]' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd0", 00:06:16.718 "bdev_name": "Malloc0" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd1", 00:06:16.718 "bdev_name": "Malloc1" 00:06:16.718 } 00:06:16.718 ]' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.718 /dev/nbd1' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.718 /dev/nbd1' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.718 256+0 records in 00:06:16.718 256+0 records out 00:06:16.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734058 s, 143 MB/s 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.718 256+0 records in 00:06:16.718 256+0 records out 00:06:16.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273763 s, 38.3 MB/s 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.718 06:30:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.977 256+0 records in 00:06:16.977 256+0 records out 00:06:16.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268001 s, 39.1 MB/s 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@51 -- # local i 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.977 06:30:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@41 -- # break 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.235 06:30:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@41 -- # break 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.493 06:30:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@65 -- # true 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.751 06:30:57 -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.751 06:30:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.010 06:30:57 -- event/event.sh@35 -- # sleep 3 00:06:18.010 [2024-07-12 06:30:57.912682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.269 [2024-07-12 06:30:57.949854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.269 
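The dd/cmp block above is the core of nbd_rpc_data_verify: write one 1 MiB random buffer (256 x 4096-byte blocks) through O_DIRECT to every exported device, compare each device byte-for-byte against the source, then stop the exports and wait for the kernel to drop the partition entries again. Condensed into a sketch (temp-file path assumed):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # one 1 MiB buffer
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$d"                     # fails loudly on mismatch
    done
    rm /tmp/nbdrandtest
    for n in nbd0 nbd1; do
        $rpc nbd_stop_disk "/dev/$n"
        # mirror of waitfornbd_exit: proceed only once the kernel has
        # dropped the partition entry
        while grep -q -w "$n" /proc/partitions; do sleep 0.1; done
    done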
[2024-07-12 06:30:57.949864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.269 [2024-07-12 06:30:57.981491] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.269 [2024-07-12 06:30:57.981549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.552 06:31:00 -- event/event.sh@23 -- # for i in {0..2} 00:06:21.552 spdk_app_start Round 1 00:06:21.552 06:31:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.552 06:31:00 -- event/event.sh@25 -- # waitforlisten 66888 /var/tmp/spdk-nbd.sock 00:06:21.552 06:31:00 -- common/autotest_common.sh@819 -- # '[' -z 66888 ']' 00:06:21.552 06:31:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.552 06:31:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.552 06:31:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.552 06:31:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.552 06:31:00 -- common/autotest_common.sh@10 -- # set +x 00:06:21.552 06:31:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.552 06:31:01 -- common/autotest_common.sh@852 -- # return 0 00:06:21.552 06:31:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.552 Malloc0 00:06:21.552 06:31:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.120 Malloc1 00:06:22.120 06:31:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@12 -- # local i 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.120 06:31:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.377 /dev/nbd0 00:06:22.377 06:31:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.377 06:31:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.377 06:31:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:22.377 06:31:02 -- common/autotest_common.sh@857 -- # local i 00:06:22.377 06:31:02 -- common/autotest_common.sh@859 
-- # (( i = 1 )) 00:06:22.377 06:31:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:22.377 06:31:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:22.377 06:31:02 -- common/autotest_common.sh@861 -- # break 00:06:22.377 06:31:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:22.377 06:31:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:22.377 06:31:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.377 1+0 records in 00:06:22.377 1+0 records out 00:06:22.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271885 s, 15.1 MB/s 00:06:22.377 06:31:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.377 06:31:02 -- common/autotest_common.sh@874 -- # size=4096 00:06:22.377 06:31:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.377 06:31:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:22.377 06:31:02 -- common/autotest_common.sh@877 -- # return 0 00:06:22.377 06:31:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.377 06:31:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.377 06:31:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.633 /dev/nbd1 00:06:22.633 06:31:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.633 06:31:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.633 06:31:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:22.633 06:31:02 -- common/autotest_common.sh@857 -- # local i 00:06:22.633 06:31:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:22.633 06:31:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:22.633 06:31:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:22.633 06:31:02 -- common/autotest_common.sh@861 -- # break 00:06:22.633 06:31:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:22.633 06:31:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:22.633 06:31:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.633 1+0 records in 00:06:22.633 1+0 records out 00:06:22.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00202684 s, 2.0 MB/s 00:06:22.891 06:31:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.891 06:31:02 -- common/autotest_common.sh@874 -- # size=4096 00:06:22.891 06:31:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.891 06:31:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:22.891 06:31:02 -- common/autotest_common.sh@877 -- # return 0 00:06:22.891 06:31:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.891 06:31:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.891 06:31:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.891 06:31:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.891 06:31:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.169 { 00:06:23.169 "nbd_device": "/dev/nbd0", 00:06:23.169 "bdev_name": "Malloc0" 00:06:23.169 }, 00:06:23.169 { 
00:06:23.169 "nbd_device": "/dev/nbd1", 00:06:23.169 "bdev_name": "Malloc1" 00:06:23.169 } 00:06:23.169 ]' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.169 { 00:06:23.169 "nbd_device": "/dev/nbd0", 00:06:23.169 "bdev_name": "Malloc0" 00:06:23.169 }, 00:06:23.169 { 00:06:23.169 "nbd_device": "/dev/nbd1", 00:06:23.169 "bdev_name": "Malloc1" 00:06:23.169 } 00:06:23.169 ]' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.169 /dev/nbd1' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.169 /dev/nbd1' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.169 256+0 records in 00:06:23.169 256+0 records out 00:06:23.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00746383 s, 140 MB/s 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.169 256+0 records in 00:06:23.169 256+0 records out 00:06:23.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243704 s, 43.0 MB/s 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.169 256+0 records in 00:06:23.169 256+0 records out 00:06:23.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350641 s, 29.9 MB/s 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.169 06:31:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.170 
06:31:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@51 -- # local i 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.170 06:31:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@41 -- # break 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.427 06:31:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@41 -- # break 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.684 06:31:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.941 06:31:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.941 06:31:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.941 06:31:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@65 -- # true 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.223 06:31:03 -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.223 06:31:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.509 06:31:04 -- event/event.sh@35 -- # sleep 3 00:06:24.509 [2024-07-12 06:31:04.274352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.509 [2024-07-12 06:31:04.309568] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:06:24.509 [2024-07-12 06:31:04.309582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.509 [2024-07-12 06:31:04.340109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.509 [2024-07-12 06:31:04.340197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.792 spdk_app_start Round 2 00:06:27.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.792 06:31:07 -- event/event.sh@23 -- # for i in {0..2} 00:06:27.792 06:31:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.792 06:31:07 -- event/event.sh@25 -- # waitforlisten 66888 /var/tmp/spdk-nbd.sock 00:06:27.792 06:31:07 -- common/autotest_common.sh@819 -- # '[' -z 66888 ']' 00:06:27.792 06:31:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.792 06:31:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.792 06:31:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.792 06:31:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.792 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:06:27.792 06:31:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.792 06:31:07 -- common/autotest_common.sh@852 -- # return 0 00:06:27.792 06:31:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.792 Malloc0 00:06:27.792 06:31:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.359 Malloc1 00:06:28.359 06:31:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@12 -- # local i 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.359 06:31:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.618 /dev/nbd0 00:06:28.618 06:31:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.618 06:31:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.618 06:31:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:28.618 06:31:08 -- common/autotest_common.sh@857 -- # local i 00:06:28.618 06:31:08 
-- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:28.618 06:31:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:28.618 06:31:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:28.618 06:31:08 -- common/autotest_common.sh@861 -- # break 00:06:28.618 06:31:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:28.618 06:31:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:28.618 06:31:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.618 1+0 records in 00:06:28.618 1+0 records out 00:06:28.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302724 s, 13.5 MB/s 00:06:28.618 06:31:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.618 06:31:08 -- common/autotest_common.sh@874 -- # size=4096 00:06:28.618 06:31:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.618 06:31:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:28.618 06:31:08 -- common/autotest_common.sh@877 -- # return 0 00:06:28.618 06:31:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.618 06:31:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.618 06:31:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.876 /dev/nbd1 00:06:28.876 06:31:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.876 06:31:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.876 06:31:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:28.876 06:31:08 -- common/autotest_common.sh@857 -- # local i 00:06:28.876 06:31:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:28.876 06:31:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:28.876 06:31:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:28.876 06:31:08 -- common/autotest_common.sh@861 -- # break 00:06:28.877 06:31:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:28.877 06:31:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:28.877 06:31:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.877 1+0 records in 00:06:28.877 1+0 records out 00:06:28.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388424 s, 10.5 MB/s 00:06:28.877 06:31:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.877 06:31:08 -- common/autotest_common.sh@874 -- # size=4096 00:06:28.877 06:31:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.877 06:31:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:28.877 06:31:08 -- common/autotest_common.sh@877 -- # return 0 00:06:28.877 06:31:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.877 06:31:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.877 06:31:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.877 06:31:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.877 06:31:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.135 { 00:06:29.135 "nbd_device": "/dev/nbd0", 00:06:29.135 "bdev_name": "Malloc0" 
00:06:29.135 }, 00:06:29.135 { 00:06:29.135 "nbd_device": "/dev/nbd1", 00:06:29.135 "bdev_name": "Malloc1" 00:06:29.135 } 00:06:29.135 ]' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.135 { 00:06:29.135 "nbd_device": "/dev/nbd0", 00:06:29.135 "bdev_name": "Malloc0" 00:06:29.135 }, 00:06:29.135 { 00:06:29.135 "nbd_device": "/dev/nbd1", 00:06:29.135 "bdev_name": "Malloc1" 00:06:29.135 } 00:06:29.135 ]' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.135 /dev/nbd1' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.135 /dev/nbd1' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.135 256+0 records in 00:06:29.135 256+0 records out 00:06:29.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00903708 s, 116 MB/s 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.135 256+0 records in 00:06:29.135 256+0 records out 00:06:29.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289765 s, 36.2 MB/s 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.135 06:31:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.135 256+0 records in 00:06:29.135 256+0 records out 00:06:29.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264659 s, 39.6 MB/s 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@51 -- # local i 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.135 06:31:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@41 -- # break 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.701 06:31:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@41 -- # break 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.958 06:31:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@65 -- # true 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.216 06:31:09 -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.216 06:31:09 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.473 06:31:10 -- event/event.sh@35 -- # sleep 3 00:06:30.473 [2024-07-12 06:31:10.291579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.473 [2024-07-12 06:31:10.327684] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:30.473 [2024-07-12 06:31:10.327692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.473 [2024-07-12 06:31:10.358141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.473 [2024-07-12 06:31:10.358203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.749 06:31:13 -- event/event.sh@38 -- # waitforlisten 66888 /var/tmp/spdk-nbd.sock 00:06:33.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.749 06:31:13 -- common/autotest_common.sh@819 -- # '[' -z 66888 ']' 00:06:33.749 06:31:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.749 06:31:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.749 06:31:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.749 06:31:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.749 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:33.749 06:31:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.749 06:31:13 -- common/autotest_common.sh@852 -- # return 0 00:06:33.749 06:31:13 -- event/event.sh@39 -- # killprocess 66888 00:06:33.749 06:31:13 -- common/autotest_common.sh@926 -- # '[' -z 66888 ']' 00:06:33.749 06:31:13 -- common/autotest_common.sh@930 -- # kill -0 66888 00:06:33.749 06:31:13 -- common/autotest_common.sh@931 -- # uname 00:06:33.749 06:31:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.749 06:31:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66888 00:06:33.749 killing process with pid 66888 00:06:33.749 06:31:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:33.749 06:31:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:33.749 06:31:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66888' 00:06:33.749 06:31:13 -- common/autotest_common.sh@945 -- # kill 66888 00:06:33.749 06:31:13 -- common/autotest_common.sh@950 -- # wait 66888 00:06:33.749 spdk_app_start is called in Round 0. 00:06:33.749 Shutdown signal received, stop current app iteration 00:06:33.749 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:33.749 spdk_app_start is called in Round 1. 00:06:33.749 Shutdown signal received, stop current app iteration 00:06:33.749 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:33.749 spdk_app_start is called in Round 2. 00:06:33.749 Shutdown signal received, stop current app iteration 00:06:33.749 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:33.749 spdk_app_start is called in Round 3. 
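killprocess, which closes out every app in this log, reduces to a short guard-then-signal pattern: confirm the pid is still alive with kill -0, read its command name with ps to make sure the target is not a sudo wrapper, then SIGTERM it and reap it. A reduced sketch (wait only reaps children of the calling shell, which holds here because the harness launched the app itself):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1                  # still running?
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                              # SIGTERM, then reap
    }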
00:06:33.749 Shutdown signal received, stop current app iteration 00:06:33.749 06:31:13 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.749 06:31:13 -- event/event.sh@42 -- # return 0 00:06:33.749 00:06:33.749 real 0m18.819s 00:06:33.749 user 0m43.110s 00:06:33.749 sys 0m2.713s 00:06:33.749 06:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.749 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:33.749 ************************************ 00:06:33.749 END TEST app_repeat 00:06:33.749 ************************************ 00:06:33.749 06:31:13 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.749 06:31:13 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.749 06:31:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.749 06:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.749 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.007 ************************************ 00:06:34.007 START TEST cpu_locks 00:06:34.007 ************************************ 00:06:34.007 06:31:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:34.007 * Looking for test storage... 00:06:34.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:34.007 06:31:13 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:34.007 06:31:13 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:34.007 06:31:13 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:34.007 06:31:13 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:34.007 06:31:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.008 06:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.008 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 ************************************ 00:06:34.008 START TEST default_locks 00:06:34.008 ************************************ 00:06:34.008 06:31:13 -- common/autotest_common.sh@1104 -- # default_locks 00:06:34.008 06:31:13 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67318 00:06:34.008 06:31:13 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.008 06:31:13 -- event/cpu_locks.sh@47 -- # waitforlisten 67318 00:06:34.008 06:31:13 -- common/autotest_common.sh@819 -- # '[' -z 67318 ']' 00:06:34.008 06:31:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.008 06:31:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.008 06:31:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.008 06:31:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.008 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 [2024-07-12 06:31:13.828904] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
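waitforlisten, used here on /var/tmp/spdk.sock as it was on spdk-nbd.sock earlier, is a bounded poll: up to max_retries (100 in the trace) attempts to reach the new target's RPC socket before giving up. A minimal equivalent sketch, assuming any cheap RPC such as rpc_get_methods serves as the probe:

    wait_for_rpc_sock() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do            # max_retries=100, as above
            kill -0 "$pid" 2>/dev/null || return 1 # target died while starting
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }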
00:06:34.008 [2024-07-12 06:31:13.829033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67318 ] 00:06:34.267 [2024-07-12 06:31:13.969154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.267 [2024-07-12 06:31:14.005669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.267 [2024-07-12 06:31:14.005843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.202 06:31:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.202 06:31:14 -- common/autotest_common.sh@852 -- # return 0 00:06:35.202 06:31:14 -- event/cpu_locks.sh@49 -- # locks_exist 67318 00:06:35.202 06:31:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.202 06:31:14 -- event/cpu_locks.sh@22 -- # lslocks -p 67318 00:06:35.460 06:31:15 -- event/cpu_locks.sh@50 -- # killprocess 67318 00:06:35.460 06:31:15 -- common/autotest_common.sh@926 -- # '[' -z 67318 ']' 00:06:35.460 06:31:15 -- common/autotest_common.sh@930 -- # kill -0 67318 00:06:35.460 06:31:15 -- common/autotest_common.sh@931 -- # uname 00:06:35.460 06:31:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.460 06:31:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67318 00:06:35.460 06:31:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.460 killing process with pid 67318 00:06:35.460 06:31:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.460 06:31:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67318' 00:06:35.460 06:31:15 -- common/autotest_common.sh@945 -- # kill 67318 00:06:35.460 06:31:15 -- common/autotest_common.sh@950 -- # wait 67318 00:06:35.718 06:31:15 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67318 00:06:35.718 06:31:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.718 06:31:15 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67318 00:06:35.718 06:31:15 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:35.718 06:31:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.718 06:31:15 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:35.718 06:31:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.718 06:31:15 -- common/autotest_common.sh@643 -- # waitforlisten 67318 00:06:35.718 06:31:15 -- common/autotest_common.sh@819 -- # '[' -z 67318 ']' 00:06:35.718 06:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.718 06:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.718 06:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
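locks_exist above is the heart of this test: an spdk_tgt started with -m 0x1 must hold an advisory lock whose name carries the spdk_cpu_lock prefix, per the grep in the trace, and lslocks makes that visible. The whole check is one pipeline (sketch; pid taken from the run above):

    # Require at least one per-core lock held by the target.
    lslocks -p 67318 | grep -q spdk_cpu_lock && echo "core locks held"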
00:06:35.718 06:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.718 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:35.718 ERROR: process (pid: 67318) is no longer running 00:06:35.718 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67318) - No such process 00:06:35.719 06:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.719 06:31:15 -- common/autotest_common.sh@852 -- # return 1 00:06:35.719 06:31:15 -- common/autotest_common.sh@643 -- # es=1 00:06:35.719 06:31:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:35.719 06:31:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:35.719 06:31:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:35.719 06:31:15 -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.719 06:31:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.719 06:31:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.719 06:31:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.719 00:06:35.719 real 0m1.664s 00:06:35.719 user 0m1.898s 00:06:35.719 sys 0m0.433s 00:06:35.719 06:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.719 ************************************ 00:06:35.719 END TEST default_locks 00:06:35.719 ************************************ 00:06:35.719 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:35.719 06:31:15 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.719 06:31:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.719 06:31:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.719 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:35.719 ************************************ 00:06:35.719 START TEST default_locks_via_rpc 00:06:35.719 ************************************ 00:06:35.719 06:31:15 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:35.719 06:31:15 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67370 00:06:35.719 06:31:15 -- event/cpu_locks.sh@63 -- # waitforlisten 67370 00:06:35.719 06:31:15 -- common/autotest_common.sh@819 -- # '[' -z 67370 ']' 00:06:35.719 06:31:15 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.719 06:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.719 06:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.719 06:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.719 06:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.719 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:35.719 [2024-07-12 06:31:15.518876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:35.719 [2024-07-12 06:31:15.519030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67370 ] 00:06:35.977 [2024-07-12 06:31:15.660388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.977 [2024-07-12 06:31:15.700208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.977 [2024-07-12 06:31:15.700397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.912 06:31:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.912 06:31:16 -- common/autotest_common.sh@852 -- # return 0 00:06:36.912 06:31:16 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.912 06:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.912 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:36.912 06:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.912 06:31:16 -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.912 06:31:16 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.912 06:31:16 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.912 06:31:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.912 06:31:16 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.912 06:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.912 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:36.912 06:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.912 06:31:16 -- event/cpu_locks.sh@71 -- # locks_exist 67370 00:06:36.912 06:31:16 -- event/cpu_locks.sh@22 -- # lslocks -p 67370 00:06:36.912 06:31:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.171 06:31:16 -- event/cpu_locks.sh@73 -- # killprocess 67370 00:06:37.172 06:31:16 -- common/autotest_common.sh@926 -- # '[' -z 67370 ']' 00:06:37.172 06:31:16 -- common/autotest_common.sh@930 -- # kill -0 67370 00:06:37.172 06:31:16 -- common/autotest_common.sh@931 -- # uname 00:06:37.172 06:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.172 06:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67370 00:06:37.172 killing process with pid 67370 00:06:37.172 06:31:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.172 06:31:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.172 06:31:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67370' 00:06:37.172 06:31:16 -- common/autotest_common.sh@945 -- # kill 67370 00:06:37.172 06:31:16 -- common/autotest_common.sh@950 -- # wait 67370 00:06:37.431 ************************************ 00:06:37.431 END TEST default_locks_via_rpc 00:06:37.431 ************************************ 00:06:37.431 00:06:37.431 real 0m1.753s 00:06:37.431 user 0m2.044s 00:06:37.431 sys 0m0.457s 00:06:37.431 06:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.431 06:31:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.431 06:31:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:37.431 06:31:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.431 06:31:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.431 06:31:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.431 
************************************ 00:06:37.431 START TEST non_locking_app_on_locked_coremask 00:06:37.431 ************************************ 00:06:37.431 06:31:17 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:37.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.431 06:31:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67421 00:06:37.431 06:31:17 -- event/cpu_locks.sh@81 -- # waitforlisten 67421 /var/tmp/spdk.sock 00:06:37.431 06:31:17 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.431 06:31:17 -- common/autotest_common.sh@819 -- # '[' -z 67421 ']' 00:06:37.431 06:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.431 06:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.431 06:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.431 06:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.431 06:31:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.431 [2024-07-12 06:31:17.315425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.431 [2024-07-12 06:31:17.315508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67421 ] 00:06:37.689 [2024-07-12 06:31:17.450752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.689 [2024-07-12 06:31:17.497480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.689 [2024-07-12 06:31:17.497744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.625 06:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.625 06:31:18 -- common/autotest_common.sh@852 -- # return 0 00:06:38.625 06:31:18 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.625 06:31:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67437 00:06:38.625 06:31:18 -- event/cpu_locks.sh@85 -- # waitforlisten 67437 /var/tmp/spdk2.sock 00:06:38.625 06:31:18 -- common/autotest_common.sh@819 -- # '[' -z 67437 ']' 00:06:38.625 06:31:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.625 06:31:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.625 06:31:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.625 06:31:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.625 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:38.625 [2024-07-12 06:31:18.393653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:38.625 [2024-07-12 06:31:18.394005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67437 ] 00:06:38.625 [2024-07-12 06:31:18.538350] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
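The "CPU core locks deactivated" notice above is the point of TEST non_locking_app_on_locked_coremask: the second target may share core 0 with the first only because it opts out of lock claiming. The two launches, as captured in this run:

./build/bin/spdk_tgt -m 0x1                                                  # first target claims core 0
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # second target shares core 0 without claiming it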
00:06:38.625 [2024-07-12 06:31:18.538409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.884 [2024-07-12 06:31:18.604071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.884 [2024-07-12 06:31:18.604232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.820 06:31:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.820 06:31:19 -- common/autotest_common.sh@852 -- # return 0 00:06:39.820 06:31:19 -- event/cpu_locks.sh@87 -- # locks_exist 67421 00:06:39.820 06:31:19 -- event/cpu_locks.sh@22 -- # lslocks -p 67421 00:06:39.820 06:31:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.401 06:31:20 -- event/cpu_locks.sh@89 -- # killprocess 67421 00:06:40.401 06:31:20 -- common/autotest_common.sh@926 -- # '[' -z 67421 ']' 00:06:40.401 06:31:20 -- common/autotest_common.sh@930 -- # kill -0 67421 00:06:40.401 06:31:20 -- common/autotest_common.sh@931 -- # uname 00:06:40.401 06:31:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.401 06:31:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67421 00:06:40.401 killing process with pid 67421 00:06:40.401 06:31:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.401 06:31:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.401 06:31:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67421' 00:06:40.401 06:31:20 -- common/autotest_common.sh@945 -- # kill 67421 00:06:40.401 06:31:20 -- common/autotest_common.sh@950 -- # wait 67421 00:06:40.968 06:31:20 -- event/cpu_locks.sh@90 -- # killprocess 67437 00:06:40.968 06:31:20 -- common/autotest_common.sh@926 -- # '[' -z 67437 ']' 00:06:40.968 06:31:20 -- common/autotest_common.sh@930 -- # kill -0 67437 00:06:40.968 06:31:20 -- common/autotest_common.sh@931 -- # uname 00:06:40.968 06:31:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.968 06:31:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67437 00:06:40.968 killing process with pid 67437 00:06:40.968 06:31:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.968 06:31:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.968 06:31:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67437' 00:06:40.968 06:31:20 -- common/autotest_common.sh@945 -- # kill 67437 00:06:40.968 06:31:20 -- common/autotest_common.sh@950 -- # wait 67437 00:06:41.228 00:06:41.228 real 0m3.636s 00:06:41.228 user 0m4.364s 00:06:41.228 sys 0m0.888s 00:06:41.228 06:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.228 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 ************************************ 00:06:41.228 END TEST non_locking_app_on_locked_coremask 00:06:41.228 ************************************ 00:06:41.228 06:31:20 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.228 06:31:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.228 06:31:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.228 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 ************************************ 00:06:41.228 START TEST locking_app_on_unlocked_coremask 00:06:41.228 ************************************ 00:06:41.228 06:31:20 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:41.228 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.228 06:31:20 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=67493 00:06:41.228 06:31:20 -- event/cpu_locks.sh@99 -- # waitforlisten 67493 /var/tmp/spdk.sock 00:06:41.228 06:31:20 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.228 06:31:20 -- common/autotest_common.sh@819 -- # '[' -z 67493 ']' 00:06:41.228 06:31:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.228 06:31:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.228 06:31:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.228 06:31:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.228 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 [2024-07-12 06:31:21.007574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.228 [2024-07-12 06:31:21.007681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67493 ] 00:06:41.487 [2024-07-12 06:31:21.148558] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.487 [2024-07-12 06:31:21.148626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.487 [2024-07-12 06:31:21.182784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.487 [2024-07-12 06:31:21.182980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.054 06:31:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.054 06:31:21 -- common/autotest_common.sh@852 -- # return 0 00:06:42.054 06:31:21 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67509 00:06:42.054 06:31:21 -- event/cpu_locks.sh@103 -- # waitforlisten 67509 /var/tmp/spdk2.sock 00:06:42.054 06:31:21 -- common/autotest_common.sh@819 -- # '[' -z 67509 ']' 00:06:42.054 06:31:21 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.054 06:31:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.054 06:31:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.054 06:31:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.054 06:31:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.054 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.313 [2024-07-12 06:31:21.998663] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:42.313 [2024-07-12 06:31:21.998753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67509 ] 00:06:42.313 [2024-07-12 06:31:22.143789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.313 [2024-07-12 06:31:22.209547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.313 [2024-07-12 06:31:22.209713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.251 06:31:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.251 06:31:22 -- common/autotest_common.sh@852 -- # return 0 00:06:43.251 06:31:22 -- event/cpu_locks.sh@105 -- # locks_exist 67509 00:06:43.251 06:31:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.251 06:31:22 -- event/cpu_locks.sh@22 -- # lslocks -p 67509 00:06:44.184 06:31:23 -- event/cpu_locks.sh@107 -- # killprocess 67493 00:06:44.184 06:31:23 -- common/autotest_common.sh@926 -- # '[' -z 67493 ']' 00:06:44.184 06:31:23 -- common/autotest_common.sh@930 -- # kill -0 67493 00:06:44.184 06:31:23 -- common/autotest_common.sh@931 -- # uname 00:06:44.184 06:31:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.184 06:31:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67493 00:06:44.184 killing process with pid 67493 00:06:44.184 06:31:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.184 06:31:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.184 06:31:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67493' 00:06:44.184 06:31:23 -- common/autotest_common.sh@945 -- # kill 67493 00:06:44.184 06:31:23 -- common/autotest_common.sh@950 -- # wait 67493 00:06:44.443 06:31:24 -- event/cpu_locks.sh@108 -- # killprocess 67509 00:06:44.443 06:31:24 -- common/autotest_common.sh@926 -- # '[' -z 67509 ']' 00:06:44.443 06:31:24 -- common/autotest_common.sh@930 -- # kill -0 67509 00:06:44.443 06:31:24 -- common/autotest_common.sh@931 -- # uname 00:06:44.443 06:31:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.443 06:31:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67509 00:06:44.443 killing process with pid 67509 00:06:44.443 06:31:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.443 06:31:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.443 06:31:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67509' 00:06:44.443 06:31:24 -- common/autotest_common.sh@945 -- # kill 67509 00:06:44.443 06:31:24 -- common/autotest_common.sh@950 -- # wait 67509 00:06:44.702 ************************************ 00:06:44.702 END TEST locking_app_on_unlocked_coremask 00:06:44.702 ************************************ 00:06:44.702 00:06:44.702 real 0m3.604s 00:06:44.702 user 0m4.252s 00:06:44.702 sys 0m0.865s 00:06:44.702 06:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.702 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.702 06:31:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.702 06:31:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.702 06:31:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.702 06:31:24 -- common/autotest_common.sh@10 -- # set +x 
00:06:44.702 ************************************ 00:06:44.702 START TEST locking_app_on_locked_coremask 00:06:44.702 ************************************ 00:06:44.702 06:31:24 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:44.702 06:31:24 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.702 06:31:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67577 00:06:44.702 06:31:24 -- event/cpu_locks.sh@116 -- # waitforlisten 67577 /var/tmp/spdk.sock 00:06:44.702 06:31:24 -- common/autotest_common.sh@819 -- # '[' -z 67577 ']' 00:06:44.702 06:31:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.702 06:31:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.702 06:31:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.702 06:31:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.702 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.961 [2024-07-12 06:31:24.682740] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:44.961 [2024-07-12 06:31:24.682900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67577 ] 00:06:44.961 [2024-07-12 06:31:24.828611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.961 [2024-07-12 06:31:24.863208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.961 [2024-07-12 06:31:24.863373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.896 06:31:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.896 06:31:25 -- common/autotest_common.sh@852 -- # return 0 00:06:45.896 06:31:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67593 00:06:45.896 06:31:25 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.896 06:31:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67593 /var/tmp/spdk2.sock 00:06:45.896 06:31:25 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.896 06:31:25 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67593 /var/tmp/spdk2.sock 00:06:45.896 06:31:25 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:45.896 06:31:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.896 06:31:25 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:45.896 06:31:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.896 06:31:25 -- common/autotest_common.sh@643 -- # waitforlisten 67593 /var/tmp/spdk2.sock 00:06:45.896 06:31:25 -- common/autotest_common.sh@819 -- # '[' -z 67593 ']' 00:06:45.896 06:31:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.896 06:31:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.896 06:31:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:45.896 06:31:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.896 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:45.896 [2024-07-12 06:31:25.707449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.896 [2024-07-12 06:31:25.707796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67593 ] 00:06:46.154 [2024-07-12 06:31:25.850502] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67577 has claimed it. 00:06:46.154 [2024-07-12 06:31:25.850587] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.720 ERROR: process (pid: 67593) is no longer running 00:06:46.720 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67593) - No such process 00:06:46.720 06:31:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.720 06:31:26 -- common/autotest_common.sh@852 -- # return 1 00:06:46.720 06:31:26 -- common/autotest_common.sh@643 -- # es=1 00:06:46.720 06:31:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.720 06:31:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.720 06:31:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.720 06:31:26 -- event/cpu_locks.sh@122 -- # locks_exist 67577 00:06:46.720 06:31:26 -- event/cpu_locks.sh@22 -- # lslocks -p 67577 00:06:46.720 06:31:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.979 06:31:26 -- event/cpu_locks.sh@124 -- # killprocess 67577 00:06:46.979 06:31:26 -- common/autotest_common.sh@926 -- # '[' -z 67577 ']' 00:06:46.979 06:31:26 -- common/autotest_common.sh@930 -- # kill -0 67577 00:06:46.979 06:31:26 -- common/autotest_common.sh@931 -- # uname 00:06:46.979 06:31:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.979 06:31:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67577 00:06:46.979 killing process with pid 67577 00:06:46.979 06:31:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:46.979 06:31:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:46.979 06:31:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67577' 00:06:46.979 06:31:26 -- common/autotest_common.sh@945 -- # kill 67577 00:06:46.979 06:31:26 -- common/autotest_common.sh@950 -- # wait 67577 00:06:47.238 00:06:47.238 real 0m2.460s 00:06:47.238 user 0m2.976s 00:06:47.238 sys 0m0.521s 00:06:47.238 06:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.238 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.238 ************************************ 00:06:47.238 END TEST locking_app_on_locked_coremask 00:06:47.238 ************************************ 00:06:47.238 06:31:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.238 06:31:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.238 06:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.238 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.238 ************************************ 00:06:47.238 START TEST locking_overlapped_coremask 00:06:47.238 ************************************ 00:06:47.238 06:31:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:47.238 06:31:27 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67633 00:06:47.238 06:31:27 -- event/cpu_locks.sh@133 -- # waitforlisten 67633 /var/tmp/spdk.sock 00:06:47.238 06:31:27 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.238 06:31:27 -- common/autotest_common.sh@819 -- # '[' -z 67633 ']' 00:06:47.238 06:31:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.238 06:31:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.238 06:31:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.238 06:31:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.238 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.497 [2024-07-12 06:31:27.164006] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.497 [2024-07-12 06:31:27.164102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67633 ] 00:06:47.497 [2024-07-12 06:31:27.298736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.497 [2024-07-12 06:31:27.335247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.497 [2024-07-12 06:31:27.335861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.497 [2024-07-12 06:31:27.335780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.497 [2024-07-12 06:31:27.335856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.450 06:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.450 06:31:28 -- common/autotest_common.sh@852 -- # return 0 00:06:48.450 06:31:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67653 00:06:48.450 06:31:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67653 /var/tmp/spdk2.sock 00:06:48.450 06:31:28 -- common/autotest_common.sh@640 -- # local es=0 00:06:48.450 06:31:28 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.450 06:31:28 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67653 /var/tmp/spdk2.sock 00:06:48.450 06:31:28 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:48.450 06:31:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.450 06:31:28 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:48.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.450 06:31:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.450 06:31:28 -- common/autotest_common.sh@643 -- # waitforlisten 67653 /var/tmp/spdk2.sock 00:06:48.450 06:31:28 -- common/autotest_common.sh@819 -- # '[' -z 67653 ']' 00:06:48.450 06:31:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.450 06:31:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.450 06:31:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
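The claim failure that follows is mask arithmetic: the first target above holds -m 0x7 (cores 0-2) while the second asks for -m 0x1c (cores 2-4), and the two masks intersect on core 2. A one-line check of the overlap:

printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2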
00:06:48.450 06:31:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.450 06:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:48.450 [2024-07-12 06:31:28.192443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:48.450 [2024-07-12 06:31:28.192526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67653 ] 00:06:48.450 [2024-07-12 06:31:28.342454] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67633 has claimed it. 00:06:48.450 [2024-07-12 06:31:28.342529] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.385 ERROR: process (pid: 67653) is no longer running 00:06:49.385 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67653) - No such process 00:06:49.385 06:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.385 06:31:28 -- common/autotest_common.sh@852 -- # return 1 00:06:49.385 06:31:28 -- common/autotest_common.sh@643 -- # es=1 00:06:49.385 06:31:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:49.385 06:31:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:49.385 06:31:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:49.385 06:31:28 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.385 06:31:28 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.385 06:31:28 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.385 06:31:28 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.385 06:31:28 -- event/cpu_locks.sh@141 -- # killprocess 67633 00:06:49.385 06:31:28 -- common/autotest_common.sh@926 -- # '[' -z 67633 ']' 00:06:49.385 06:31:28 -- common/autotest_common.sh@930 -- # kill -0 67633 00:06:49.385 06:31:28 -- common/autotest_common.sh@931 -- # uname 00:06:49.385 06:31:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.385 06:31:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67633 00:06:49.385 06:31:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.385 06:31:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.385 06:31:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67633' 00:06:49.385 killing process with pid 67633 00:06:49.385 06:31:28 -- common/autotest_common.sh@945 -- # kill 67633 00:06:49.385 06:31:28 -- common/autotest_common.sh@950 -- # wait 67633 00:06:49.385 00:06:49.385 real 0m2.121s 00:06:49.385 user 0m6.249s 00:06:49.385 sys 0m0.314s 00:06:49.385 06:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.385 ************************************ 00:06:49.385 END TEST locking_overlapped_coremask 00:06:49.385 ************************************ 00:06:49.385 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:49.385 06:31:29 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.385 06:31:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.385 06:31:29 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.385 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:49.385 ************************************ 00:06:49.385 START TEST locking_overlapped_coremask_via_rpc 00:06:49.385 ************************************ 00:06:49.385 06:31:29 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:49.385 06:31:29 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67693 00:06:49.385 06:31:29 -- event/cpu_locks.sh@149 -- # waitforlisten 67693 /var/tmp/spdk.sock 00:06:49.385 06:31:29 -- common/autotest_common.sh@819 -- # '[' -z 67693 ']' 00:06:49.385 06:31:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.385 06:31:29 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.385 06:31:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.385 06:31:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.385 06:31:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.385 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:49.643 [2024-07-12 06:31:29.335883] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:49.643 [2024-07-12 06:31:29.336009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67693 ] 00:06:49.643 [2024-07-12 06:31:29.476408] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:49.643 [2024-07-12 06:31:29.476471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.643 [2024-07-12 06:31:29.518286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.643 [2024-07-12 06:31:29.518784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.643 [2024-07-12 06:31:29.518890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.643 [2024-07-12 06:31:29.518896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.575 06:31:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.575 06:31:30 -- common/autotest_common.sh@852 -- # return 0 00:06:50.575 06:31:30 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67711 00:06:50.575 06:31:30 -- event/cpu_locks.sh@153 -- # waitforlisten 67711 /var/tmp/spdk2.sock 00:06:50.575 06:31:30 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.575 06:31:30 -- common/autotest_common.sh@819 -- # '[' -z 67711 ']' 00:06:50.575 06:31:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.575 06:31:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.575 06:31:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:50.575 06:31:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.575 06:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:50.575 [2024-07-12 06:31:30.407438] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.575 [2024-07-12 06:31:30.407736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67711 ] 00:06:50.832 [2024-07-12 06:31:30.560392] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:50.832 [2024-07-12 06:31:30.560446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.832 [2024-07-12 06:31:30.631801] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.832 [2024-07-12 06:31:30.632093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.832 [2024-07-12 06:31:30.634061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.832 [2024-07-12 06:31:30.634064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.761 06:31:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.761 06:31:31 -- common/autotest_common.sh@852 -- # return 0 00:06:51.761 06:31:31 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.761 06:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.761 06:31:31 -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 06:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.761 06:31:31 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.761 06:31:31 -- common/autotest_common.sh@640 -- # local es=0 00:06:51.761 06:31:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.761 06:31:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:51.761 06:31:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.761 06:31:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:51.761 06:31:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.761 06:31:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.761 06:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.761 06:31:31 -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 [2024-07-12 06:31:31.416109] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67693 has claimed it. 00:06:51.761 request: 00:06:51.761 { 00:06:51.761 "method": "framework_enable_cpumask_locks", 00:06:51.761 "req_id": 1 00:06:51.761 } 00:06:51.761 Got JSON-RPC error response 00:06:51.761 response: 00:06:51.761 { 00:06:51.761 "code": -32603, 00:06:51.761 "message": "Failed to claim CPU core: 2" 00:06:51.761 } 00:06:51.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
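The request/response pair above is plain JSON-RPC; -32603 is the protocol's generic internal-error code, used here to surface the failed claim of core 2. A hedged reproduction against the second target's socket, again assuming scripts/rpc.py:

./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected while pid 67693 still holds core 2: 'Failed to claim CPU core: 2'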
00:06:51.761 06:31:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:51.761 06:31:31 -- common/autotest_common.sh@643 -- # es=1 00:06:51.761 06:31:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:51.761 06:31:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:51.761 06:31:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:51.761 06:31:31 -- event/cpu_locks.sh@158 -- # waitforlisten 67693 /var/tmp/spdk.sock 00:06:51.761 06:31:31 -- common/autotest_common.sh@819 -- # '[' -z 67693 ']' 00:06:51.761 06:31:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.761 06:31:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.761 06:31:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.761 06:31:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.761 06:31:31 -- common/autotest_common.sh@10 -- # set +x 00:06:52.018 06:31:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.018 06:31:31 -- common/autotest_common.sh@852 -- # return 0 00:06:52.018 06:31:31 -- event/cpu_locks.sh@159 -- # waitforlisten 67711 /var/tmp/spdk2.sock 00:06:52.018 06:31:31 -- common/autotest_common.sh@819 -- # '[' -z 67711 ']' 00:06:52.018 06:31:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.018 06:31:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.018 06:31:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.018 06:31:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.018 06:31:31 -- common/autotest_common.sh@10 -- # set +x 00:06:52.276 06:31:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.276 06:31:32 -- common/autotest_common.sh@852 -- # return 0 00:06:52.276 06:31:32 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.276 06:31:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.276 06:31:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.276 ************************************ 00:06:52.276 END TEST locking_overlapped_coremask_via_rpc 00:06:52.276 ************************************ 00:06:52.276 06:31:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.276 00:06:52.276 real 0m2.774s 00:06:52.276 user 0m1.501s 00:06:52.276 sys 0m0.188s 00:06:52.276 06:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.276 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:52.276 06:31:32 -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.276 06:31:32 -- event/cpu_locks.sh@15 -- # [[ -z 67693 ]] 00:06:52.276 06:31:32 -- event/cpu_locks.sh@15 -- # killprocess 67693 00:06:52.276 06:31:32 -- common/autotest_common.sh@926 -- # '[' -z 67693 ']' 00:06:52.276 06:31:32 -- common/autotest_common.sh@930 -- # kill -0 67693 00:06:52.276 06:31:32 -- common/autotest_common.sh@931 -- # uname 00:06:52.276 06:31:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.276 06:31:32 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 67693 00:06:52.276 killing process with pid 67693 00:06:52.276 06:31:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.276 06:31:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.276 06:31:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67693' 00:06:52.276 06:31:32 -- common/autotest_common.sh@945 -- # kill 67693 00:06:52.276 06:31:32 -- common/autotest_common.sh@950 -- # wait 67693 00:06:52.534 06:31:32 -- event/cpu_locks.sh@16 -- # [[ -z 67711 ]] 00:06:52.534 06:31:32 -- event/cpu_locks.sh@16 -- # killprocess 67711 00:06:52.534 06:31:32 -- common/autotest_common.sh@926 -- # '[' -z 67711 ']' 00:06:52.534 06:31:32 -- common/autotest_common.sh@930 -- # kill -0 67711 00:06:52.534 06:31:32 -- common/autotest_common.sh@931 -- # uname 00:06:52.534 06:31:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.534 06:31:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67711 00:06:52.534 killing process with pid 67711 00:06:52.534 06:31:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:52.534 06:31:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:52.534 06:31:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67711' 00:06:52.534 06:31:32 -- common/autotest_common.sh@945 -- # kill 67711 00:06:52.534 06:31:32 -- common/autotest_common.sh@950 -- # wait 67711 00:06:52.792 06:31:32 -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.792 Process with pid 67693 is not found 00:06:52.792 06:31:32 -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.792 06:31:32 -- event/cpu_locks.sh@15 -- # [[ -z 67693 ]] 00:06:52.792 06:31:32 -- event/cpu_locks.sh@15 -- # killprocess 67693 00:06:52.792 06:31:32 -- common/autotest_common.sh@926 -- # '[' -z 67693 ']' 00:06:52.792 06:31:32 -- common/autotest_common.sh@930 -- # kill -0 67693 00:06:52.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67693) - No such process 00:06:52.792 06:31:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67693 is not found' 00:06:52.792 06:31:32 -- event/cpu_locks.sh@16 -- # [[ -z 67711 ]] 00:06:52.792 06:31:32 -- event/cpu_locks.sh@16 -- # killprocess 67711 00:06:52.792 06:31:32 -- common/autotest_common.sh@926 -- # '[' -z 67711 ']' 00:06:52.792 Process with pid 67711 is not found 00:06:52.792 06:31:32 -- common/autotest_common.sh@930 -- # kill -0 67711 00:06:52.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67711) - No such process 00:06:52.792 06:31:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67711 is not found' 00:06:52.792 06:31:32 -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.792 ************************************ 00:06:52.792 END TEST cpu_locks 00:06:52.792 ************************************ 00:06:52.792 00:06:52.792 real 0m18.939s 00:06:52.792 user 0m35.681s 00:06:52.792 sys 0m4.288s 00:06:52.792 06:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.792 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:52.792 ************************************ 00:06:52.792 END TEST event 00:06:52.792 ************************************ 00:06:52.792 00:06:52.792 real 0m45.573s 00:06:52.792 user 1m30.709s 00:06:52.792 sys 0m7.630s 00:06:52.792 06:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.792 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:52.792 06:31:32 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.792 06:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.792 06:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.792 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:52.792 ************************************ 00:06:52.792 START TEST thread 00:06:52.792 ************************************ 00:06:52.792 06:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:53.049 * Looking for test storage... 00:06:53.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:53.049 06:31:32 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.049 06:31:32 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:53.049 06:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.049 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:53.049 ************************************ 00:06:53.049 START TEST thread_poller_perf 00:06:53.049 ************************************ 00:06:53.049 06:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.049 [2024-07-12 06:31:32.801703] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:53.049 [2024-07-12 06:31:32.801856] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67833 ] 00:06:53.049 [2024-07-12 06:31:32.941834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.306 [2024-07-12 06:31:32.980369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.306 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.239 ====================================== 00:06:54.239 busy:2211126150 (cyc) 00:06:54.239 total_run_count: 271000 00:06:54.239 tsc_hz: 2200000000 (cyc) 00:06:54.239 ====================================== 00:06:54.239 poller_cost: 8159 (cyc), 3708 (nsec) 00:06:54.239 00:06:54.239 real 0m1.263s 00:06:54.239 user 0m1.115s 00:06:54.239 sys 0m0.037s 00:06:54.239 06:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.239 06:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.239 ************************************ 00:06:54.239 END TEST thread_poller_perf 00:06:54.239 ************************************ 00:06:54.239 06:31:34 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.239 06:31:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:54.239 06:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.239 06:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.239 ************************************ 00:06:54.239 START TEST thread_poller_perf 00:06:54.239 ************************************ 00:06:54.239 06:31:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.239 [2024-07-12 06:31:34.110361] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
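The poller_cost figure above is the cycle count divided by the reported TSC rate; at tsc_hz 2200000000, one nanosecond is 2.2 cycles, so 8159 cyc comes out as 3708 nsec. The same conversion applies to the busy-poll run that follows (561 cyc -> 255 nsec). Worked out in shell:

awk 'BEGIN { printf "%d\n", 8159 / 2.2 }'   # cyc -> nsec at 2.2 GHz: prints 3708, matching the report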
00:06:54.239 [2024-07-12 06:31:34.110468] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67869 ] 00:06:54.497 [2024-07-12 06:31:34.253509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.497 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:54.497 [2024-07-12 06:31:34.287518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.478 ====================================== 00:06:55.478 busy:2202736126 (cyc) 00:06:55.478 total_run_count: 3924000 00:06:55.478 tsc_hz: 2200000000 (cyc) 00:06:55.478 ====================================== 00:06:55.478 poller_cost: 561 (cyc), 255 (nsec) 00:06:55.478 00:06:55.478 real 0m1.252s 00:06:55.478 user 0m1.103s 00:06:55.478 sys 0m0.040s 00:06:55.478 06:31:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.478 ************************************ 00:06:55.478 END TEST thread_poller_perf 00:06:55.479 ************************************ 00:06:55.479 06:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:55.479 06:31:35 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.479 ************************************ 00:06:55.479 END TEST thread 00:06:55.479 ************************************ 00:06:55.479 00:06:55.479 real 0m2.686s 00:06:55.479 user 0m2.275s 00:06:55.479 sys 0m0.189s 00:06:55.479 06:31:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.479 06:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:55.737 06:31:35 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:55.737 06:31:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.737 06:31:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.737 06:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:55.737 ************************************ 00:06:55.737 START TEST accel 00:06:55.737 ************************************ 00:06:55.737 06:31:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:55.737 * Looking for test storage... 00:06:55.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:55.737 06:31:35 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:55.737 06:31:35 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:55.737 06:31:35 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.737 06:31:35 -- accel/accel.sh@59 -- # spdk_tgt_pid=67937 00:06:55.737 06:31:35 -- accel/accel.sh@60 -- # waitforlisten 67937 00:06:55.737 06:31:35 -- common/autotest_common.sh@819 -- # '[' -z 67937 ']' 00:06:55.737 06:31:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.737 06:31:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:55.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.737 06:31:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.737 06:31:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:55.737 06:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:55.737 06:31:35 -- accel/accel.sh@58 -- # build_accel_config 00:06:55.737 06:31:35 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:55.737 06:31:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.737 06:31:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.737 06:31:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.737 06:31:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.737 06:31:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.737 06:31:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.737 06:31:35 -- accel/accel.sh@42 -- # jq -r . 00:06:55.737 [2024-07-12 06:31:35.566057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:55.737 [2024-07-12 06:31:35.566150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67937 ] 00:06:55.995 [2024-07-12 06:31:35.707332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.995 [2024-07-12 06:31:35.745464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.995 [2024-07-12 06:31:35.745658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.930 06:31:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:56.930 06:31:36 -- common/autotest_common.sh@852 -- # return 0 00:06:56.930 06:31:36 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:56.930 06:31:36 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:56.930 06:31:36 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:56.930 06:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.930 06:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 06:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # 
IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # IFS== 00:06:56.930 06:31:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.930 06:31:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.930 06:31:36 -- accel/accel.sh@67 -- # killprocess 67937 00:06:56.930 06:31:36 -- common/autotest_common.sh@926 -- # '[' -z 67937 ']' 00:06:56.930 06:31:36 -- common/autotest_common.sh@930 -- # kill -0 67937 00:06:56.930 06:31:36 -- common/autotest_common.sh@931 -- # uname 00:06:56.930 06:31:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:56.930 06:31:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67937 00:06:56.930 killing process with pid 67937 00:06:56.930 06:31:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:56.930 06:31:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:56.930 06:31:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67937' 00:06:56.930 06:31:36 -- common/autotest_common.sh@945 -- # kill 67937 00:06:56.930 06:31:36 -- common/autotest_common.sh@950 -- # wait 67937 00:06:57.189 06:31:36 -- accel/accel.sh@68 -- # trap - ERR 00:06:57.189 06:31:36 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:57.189 06:31:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:57.189 06:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.189 06:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 06:31:36 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:57.189 06:31:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.189 06:31:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.189 06:31:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.189 06:31:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.189 06:31:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.189 06:31:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.189 06:31:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.189 06:31:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.189 06:31:36 -- accel/accel.sh@42 -- # jq -r . 
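The long read loop above is accel.sh turning the RPC's opcode map into its expected_opcs table, one "opc=module" pair at a time. A condensed sketch of that pattern, assuming a running spdk_tgt and a stock rpc.py path (both illustrative, not part of this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path for illustration
    exp_opcs=($($rpc_py accel_get_opc_assignments |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split "copy=software" on '='
        expected_opcs["$opc"]=$module
    done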
00:06:57.189 06:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.189 06:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 06:31:36 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.189 06:31:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.189 06:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.189 06:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.189 ************************************ 00:06:57.189 START TEST accel_missing_filename 00:06:57.189 ************************************ 00:06:57.189 06:31:36 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:57.189 06:31:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.189 06:31:36 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:57.189 06:31:36 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.189 06:31:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.189 06:31:36 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.189 06:31:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.189 06:31:36 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:57.189 06:31:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:57.189 06:31:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.189 06:31:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.189 06:31:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.189 06:31:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.189 06:31:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.189 06:31:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.189 06:31:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.189 06:31:36 -- accel/accel.sh@42 -- # jq -r . 00:06:57.189 [2024-07-12 06:31:36.992112] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.189 [2024-07-12 06:31:36.992540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67989 ] 00:06:57.448 [2024-07-12 06:31:37.131789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.448 [2024-07-12 06:31:37.165752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.448 [2024-07-12 06:31:37.195412] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.448 [2024-07-12 06:31:37.234511] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:57.448 A filename is required. 
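accel_perf exits non-zero here by design, so the harness wraps it in NOT and folds the raw exit status before declaring the case passed, as the es= lines that follow show. A rough sketch of that folding, assuming semantics like autotest_common.sh's NOT helper (simplified):

    NOT() {
        local es=0
        "$@" || es=$?                          # run the command; 234 in this log
        (( es > 128 )) && es=$(( es - 128 ))   # strip the signal bias: 234 -> 106
        case "$es" in
            0) es=0 ;;
            *) es=1 ;;                         # any failure collapses to 1
        esac
        (( !es == 0 ))                         # succeed only when the command failed
    }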
00:06:57.448 06:31:37 -- common/autotest_common.sh@643 -- # es=234 00:06:57.448 06:31:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:57.448 06:31:37 -- common/autotest_common.sh@652 -- # es=106 00:06:57.448 06:31:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:57.448 06:31:37 -- common/autotest_common.sh@660 -- # es=1 00:06:57.448 06:31:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:57.448 00:06:57.448 real 0m0.324s 00:06:57.448 user 0m0.192s 00:06:57.448 sys 0m0.073s 00:06:57.448 06:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.448 ************************************ 00:06:57.448 END TEST accel_missing_filename 00:06:57.448 ************************************ 00:06:57.448 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.448 06:31:37 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.448 06:31:37 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:57.448 06:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.448 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.448 ************************************ 00:06:57.448 START TEST accel_compress_verify 00:06:57.448 ************************************ 00:06:57.448 06:31:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.448 06:31:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.448 06:31:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.448 06:31:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.448 06:31:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.448 06:31:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.448 06:31:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.448 06:31:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.448 06:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.448 06:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.448 06:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.448 06:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.448 06:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.448 06:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.448 06:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.448 06:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.448 06:31:37 -- accel/accel.sh@42 -- # jq -r . 00:06:57.448 [2024-07-12 06:31:37.352868] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:57.448 [2024-07-12 06:31:37.353043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68013 ] 00:06:57.707 [2024-07-12 06:31:37.482411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.707 [2024-07-12 06:31:37.518761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.707 [2024-07-12 06:31:37.548088] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.707 [2024-07-12 06:31:37.586463] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:57.966 00:06:57.966 Compression does not support the verify option, aborting. 00:06:57.966 06:31:37 -- common/autotest_common.sh@643 -- # es=161 00:06:57.966 06:31:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:57.966 06:31:37 -- common/autotest_common.sh@652 -- # es=33 00:06:57.966 ************************************ 00:06:57.966 END TEST accel_compress_verify 00:06:57.966 ************************************ 00:06:57.966 06:31:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:57.966 06:31:37 -- common/autotest_common.sh@660 -- # es=1 00:06:57.966 06:31:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:57.966 00:06:57.966 real 0m0.312s 00:06:57.966 user 0m0.187s 00:06:57.966 sys 0m0.072s 00:06:57.966 06:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.966 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 06:31:37 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:57.966 06:31:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.966 06:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.966 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 ************************************ 00:06:57.966 START TEST accel_wrong_workload 00:06:57.966 ************************************ 00:06:57.966 06:31:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:57.966 06:31:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.966 06:31:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:57.966 06:31:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.966 06:31:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.966 06:31:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.966 06:31:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.966 06:31:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:57.966 06:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:57.966 06:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.966 06:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.966 06:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.966 06:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.966 06:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.966 06:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.966 06:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.966 06:31:37 -- accel/accel.sh@42 -- # jq -r . 
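Every accel_perf launch in this file takes its JSON config over an anonymous descriptor (-c /dev/fd/62) rather than a temp file; build_accel_config assembles the JSON and the trailing jq -r . validates it. A minimal sketch of the same plumbing, with a hypothetical empty config standing in for the assembled JSON:

    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    # process substitution hands the config to the app as an anonymous /dev/fd/NN file
    "$accel_perf" -c <(echo '{}') -t 1 -w copy -y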
00:06:57.966 Unsupported workload type: foobar 00:06:57.966 [2024-07-12 06:31:37.709487] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:57.966 accel_perf options: 00:06:57.966 [-h help message] 00:06:57.966 [-q queue depth per core] 00:06:57.966 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.966 [-T number of threads per core 00:06:57.966 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.966 [-t time in seconds] 00:06:57.966 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.966 [ dif_verify, , dif_generate, dif_generate_copy 00:06:57.966 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.966 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.966 [-S for crc32c workload, use this seed value (default 0) 00:06:57.966 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.966 [-f for fill workload, use this BYTE value (default 255) 00:06:57.966 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.966 [-y verify result if this switch is on] 00:06:57.966 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.966 Can be used to spread operations across a wider range of memory. 00:06:57.966 06:31:37 -- common/autotest_common.sh@643 -- # es=1 00:06:57.966 06:31:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:57.966 06:31:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:57.966 06:31:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:57.966 00:06:57.966 real 0m0.032s 00:06:57.966 user 0m0.019s 00:06:57.966 sys 0m0.012s 00:06:57.966 06:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.966 ************************************ 00:06:57.966 END TEST accel_wrong_workload 00:06:57.966 ************************************ 00:06:57.966 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 06:31:37 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.966 06:31:37 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:57.966 06:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.966 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 ************************************ 00:06:57.966 START TEST accel_negative_buffers 00:06:57.966 ************************************ 00:06:57.966 06:31:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.966 06:31:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.966 06:31:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:57.966 06:31:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.966 06:31:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.966 06:31:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.967 06:31:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.967 06:31:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:57.967 06:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:57.967 06:31:37 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:57.967 06:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.967 06:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.967 06:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.967 06:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.967 06:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.967 06:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.967 06:31:37 -- accel/accel.sh@42 -- # jq -r . 00:06:57.967 -x option must be non-negative. 00:06:57.967 [2024-07-12 06:31:37.787181] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:57.967 accel_perf options: 00:06:57.967 [-h help message] 00:06:57.967 [-q queue depth per core] 00:06:57.967 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.967 [-T number of threads per core 00:06:57.967 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.967 [-t time in seconds] 00:06:57.967 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.967 [ dif_verify, , dif_generate, dif_generate_copy 00:06:57.967 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.967 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.967 [-S for crc32c workload, use this seed value (default 0) 00:06:57.967 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.967 [-f for fill workload, use this BYTE value (default 255) 00:06:57.967 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.967 [-y verify result if this switch is on] 00:06:57.967 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.967 Can be used to spread operations across a wider range of memory. 
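Both rejected runs above fail inside spdk_app_parse_args before EAL ever initializes, which is why no DPDK startup lines follow them. For contrast, a plausible xor invocation that would clear the check '-x -1' just failed (same binary path; option values illustrative, since the help text says the minimum source-buffer count is 2):

    # xor across 3 source buffers, verifying results, for 1 second
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w xor -y -x 3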
00:06:57.967 06:31:37 -- common/autotest_common.sh@643 -- # es=1 00:06:57.967 06:31:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:57.967 06:31:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:57.967 06:31:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:57.967 00:06:57.967 real 0m0.027s 00:06:57.967 user 0m0.016s 00:06:57.967 sys 0m0.011s 00:06:57.967 06:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.967 ************************************ 00:06:57.967 END TEST accel_negative_buffers 00:06:57.967 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.967 ************************************ 00:06:57.967 06:31:37 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:57.967 06:31:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:57.967 06:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.967 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:57.967 ************************************ 00:06:57.967 START TEST accel_crc32c 00:06:57.967 ************************************ 00:06:57.967 06:31:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:57.967 06:31:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.967 06:31:37 -- accel/accel.sh@17 -- # local accel_module 00:06:57.967 06:31:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.967 06:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:57.967 06:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.967 06:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.967 06:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.967 06:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.967 06:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.967 06:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.967 06:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.967 06:31:37 -- accel/accel.sh@42 -- # jq -r . 00:06:57.967 [2024-07-12 06:31:37.852980] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.967 [2024-07-12 06:31:37.853082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68072 ] 00:06:58.226 [2024-07-12 06:31:37.985249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.226 [2024-07-12 06:31:38.021233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.602 06:31:39 -- accel/accel.sh@18 -- # out=' 00:06:59.602 SPDK Configuration: 00:06:59.602 Core mask: 0x1 00:06:59.602 00:06:59.602 Accel Perf Configuration: 00:06:59.602 Workload Type: crc32c 00:06:59.602 CRC-32C seed: 32 00:06:59.602 Transfer size: 4096 bytes 00:06:59.602 Vector count 1 00:06:59.602 Module: software 00:06:59.602 Queue depth: 32 00:06:59.602 Allocate depth: 32 00:06:59.602 # threads/core: 1 00:06:59.602 Run time: 1 seconds 00:06:59.602 Verify: Yes 00:06:59.602 00:06:59.602 Running for 1 seconds... 
00:06:59.602 00:06:59.602 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.602 ------------------------------------------------------------------------------------ 00:06:59.602 0,0 414176/s 1617 MiB/s 0 0 00:06:59.602 ==================================================================================== 00:06:59.602 Total 414176/s 1617 MiB/s 0 0' 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:59.602 06:31:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.602 06:31:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:59.602 06:31:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.602 06:31:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.602 06:31:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.602 06:31:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.602 06:31:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.602 06:31:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.602 06:31:39 -- accel/accel.sh@42 -- # jq -r . 00:06:59.602 [2024-07-12 06:31:39.167478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:59.602 [2024-07-12 06:31:39.167563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68091 ] 00:06:59.602 [2024-07-12 06:31:39.305253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.602 [2024-07-12 06:31:39.340553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=0x1 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=crc32c 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=32 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=software 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=32 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=32 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=1 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val=Yes 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.602 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:59.602 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:59.602 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.603 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:59.603 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@21 -- # val= 00:07:00.980 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@21 -- # val= 00:07:00.980 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@21 -- # val= 00:07:00.980 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@21 -- # val= 00:07:00.980 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@21 -- # val= 00:07:00.980 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:07:00.980 06:31:40 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@21 -- # val= 00:07:00.980 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:07:00.980 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:07:00.980 06:31:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.980 06:31:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:00.980 06:31:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.980 00:07:00.980 real 0m2.639s 00:07:00.980 user 0m2.304s 00:07:00.980 sys 0m0.132s 00:07:00.981 06:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.981 06:31:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.981 ************************************ 00:07:00.981 END TEST accel_crc32c 00:07:00.981 ************************************ 00:07:00.981 06:31:40 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:00.981 06:31:40 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.981 06:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.981 06:31:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.981 ************************************ 00:07:00.981 START TEST accel_crc32c_C2 00:07:00.981 ************************************ 00:07:00.981 06:31:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:00.981 06:31:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.981 06:31:40 -- accel/accel.sh@17 -- # local accel_module 00:07:00.981 06:31:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.981 06:31:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.981 06:31:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.981 06:31:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.981 06:31:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.981 06:31:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.981 06:31:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.981 06:31:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.981 06:31:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.981 06:31:40 -- accel/accel.sh@42 -- # jq -r . 00:07:00.981 [2024-07-12 06:31:40.534891] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:00.981 [2024-07-12 06:31:40.535008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68120 ] 00:07:00.981 [2024-07-12 06:31:40.665264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.981 [2024-07-12 06:31:40.700191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.915 06:31:41 -- accel/accel.sh@18 -- # out=' 00:07:01.915 SPDK Configuration: 00:07:01.915 Core mask: 0x1 00:07:01.915 00:07:01.915 Accel Perf Configuration: 00:07:01.915 Workload Type: crc32c 00:07:01.915 CRC-32C seed: 0 00:07:01.915 Transfer size: 4096 bytes 00:07:01.915 Vector count 2 00:07:01.915 Module: software 00:07:01.915 Queue depth: 32 00:07:01.915 Allocate depth: 32 00:07:01.915 # threads/core: 1 00:07:01.915 Run time: 1 seconds 00:07:01.915 Verify: Yes 00:07:01.915 00:07:01.915 Running for 1 seconds... 
00:07:01.915 00:07:01.915 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.915 ------------------------------------------------------------------------------------ 00:07:01.915 0,0 332704/s 2599 MiB/s 0 0 00:07:01.915 ==================================================================================== 00:07:01.915 Total 332704/s 2599 MiB/s 0 0' 00:07:01.915 06:31:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:01.915 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:07:01.915 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:07:01.915 06:31:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:01.915 06:31:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.915 06:31:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.915 06:31:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.915 06:31:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.915 06:31:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.915 06:31:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.915 06:31:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.915 06:31:41 -- accel/accel.sh@42 -- # jq -r . 00:07:02.174 [2024-07-12 06:31:41.848888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:02.174 [2024-07-12 06:31:41.848995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68140 ] 00:07:02.174 [2024-07-12 06:31:41.986544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.174 [2024-07-12 06:31:42.027534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=0x1 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=crc32c 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=0 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 --
accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=software 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=32 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=32 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=1 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val=Yes 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:02.174 06:31:42 -- accel/accel.sh@21 -- # val= 00:07:02.174 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:07:02.174 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@21 -- # val= 00:07:03.550 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@21 -- # val= 00:07:03.550 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@21 -- # val= 00:07:03.550 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@21 -- # val= 00:07:03.550 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@21 -- # val= 00:07:03.550 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:07:03.550 06:31:43 -- 
accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@21 -- # val= 00:07:03.550 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:07:03.550 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:07:03.550 06:31:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.550 06:31:43 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:03.550 06:31:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.550 00:07:03.550 real 0m2.650s 00:07:03.550 user 0m2.299s 00:07:03.550 sys 0m0.148s 00:07:03.550 06:31:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.550 06:31:43 -- common/autotest_common.sh@10 -- # set +x 00:07:03.550 ************************************ 00:07:03.550 END TEST accel_crc32c_C2 00:07:03.550 ************************************ 00:07:03.550 06:31:43 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:03.550 06:31:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:03.550 06:31:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.550 06:31:43 -- common/autotest_common.sh@10 -- # set +x 00:07:03.550 ************************************ 00:07:03.550 START TEST accel_copy 00:07:03.550 ************************************ 00:07:03.551 06:31:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:03.551 06:31:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.551 06:31:43 -- accel/accel.sh@17 -- # local accel_module 00:07:03.551 06:31:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:03.551 06:31:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.551 06:31:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.551 06:31:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.551 06:31:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.551 06:31:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.551 06:31:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.551 06:31:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.551 06:31:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.551 06:31:43 -- accel/accel.sh@42 -- # jq -r . 00:07:03.551 [2024-07-12 06:31:43.225042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.551 [2024-07-12 06:31:43.225134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68171 ] 00:07:03.551 [2024-07-12 06:31:43.359488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.551 [2024-07-12 06:31:43.404243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.960 06:31:44 -- accel/accel.sh@18 -- # out=' 00:07:04.960 SPDK Configuration: 00:07:04.960 Core mask: 0x1 00:07:04.960 00:07:04.960 Accel Perf Configuration: 00:07:04.960 Workload Type: copy 00:07:04.960 Transfer size: 4096 bytes 00:07:04.960 Vector count 1 00:07:04.960 Module: software 00:07:04.960 Queue depth: 32 00:07:04.960 Allocate depth: 32 00:07:04.960 # threads/core: 1 00:07:04.960 Run time: 1 seconds 00:07:04.960 Verify: Yes 00:07:04.960 00:07:04.960 Running for 1 seconds... 
00:07:04.960 00:07:04.960 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.960 ------------------------------------------------------------------------------------ 00:07:04.960 0,0 278816/s 1089 MiB/s 0 0 00:07:04.960 ==================================================================================== 00:07:04.960 Total 278816/s 1089 MiB/s 0 0' 00:07:04.960 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.960 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.960 06:31:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:04.960 06:31:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.960 06:31:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.960 06:31:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.960 06:31:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.960 06:31:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.960 06:31:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.960 06:31:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.960 06:31:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.960 06:31:44 -- accel/accel.sh@42 -- # jq -r . 00:07:04.960 [2024-07-12 06:31:44.559826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:04.960 [2024-07-12 06:31:44.559935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68190 ] 00:07:04.960 [2024-07-12 06:31:44.697796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.960 [2024-07-12 06:31:44.737413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.960 06:31:44 -- accel/accel.sh@21 -- # val= 00:07:04.960 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.960 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.960 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.960 06:31:44 -- accel/accel.sh@21 -- # val= 00:07:04.960 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.960 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.960 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=0x1 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val= 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val= 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=copy 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- 
accel/accel.sh@21 -- # val= 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=software 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=32 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=32 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=1 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val=Yes 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val= 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:04.961 06:31:44 -- accel/accel.sh@21 -- # val= 00:07:04.961 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:07:04.961 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@21 -- # val= 00:07:06.337 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@21 -- # val= 00:07:06.337 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@21 -- # val= 00:07:06.337 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@21 -- # val= 00:07:06.337 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@21 -- # val= 00:07:06.337 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@21 -- # val= 00:07:06.337 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.337 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:07:06.337 06:31:45 -- 
accel/accel.sh@20 -- # read -r var val 00:07:06.337 06:31:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.337 06:31:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:06.337 06:31:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.337 00:07:06.337 real 0m2.680s 00:07:06.337 user 0m2.330s 00:07:06.337 sys 0m0.145s 00:07:06.337 06:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.337 ************************************ 00:07:06.337 END TEST accel_copy 00:07:06.337 ************************************ 00:07:06.337 06:31:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.337 06:31:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.337 06:31:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:06.337 06:31:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.337 06:31:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.337 ************************************ 00:07:06.337 START TEST accel_fill 00:07:06.337 ************************************ 00:07:06.337 06:31:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.337 06:31:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.337 06:31:45 -- accel/accel.sh@17 -- # local accel_module 00:07:06.337 06:31:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.337 06:31:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.337 06:31:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.337 06:31:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.337 06:31:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.337 06:31:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.337 06:31:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.337 06:31:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.337 06:31:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.337 06:31:45 -- accel/accel.sh@42 -- # jq -r . 00:07:06.337 [2024-07-12 06:31:45.945231] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.337 [2024-07-12 06:31:45.945356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68225 ] 00:07:06.337 [2024-07-12 06:31:46.081431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.337 [2024-07-12 06:31:46.124394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.735 06:31:47 -- accel/accel.sh@18 -- # out=' 00:07:07.735 SPDK Configuration: 00:07:07.735 Core mask: 0x1 00:07:07.735 00:07:07.735 Accel Perf Configuration: 00:07:07.735 Workload Type: fill 00:07:07.735 Fill pattern: 0x80 00:07:07.735 Transfer size: 4096 bytes 00:07:07.735 Vector count 1 00:07:07.735 Module: software 00:07:07.735 Queue depth: 64 00:07:07.735 Allocate depth: 64 00:07:07.735 # threads/core: 1 00:07:07.735 Run time: 1 seconds 00:07:07.735 Verify: Yes 00:07:07.735 00:07:07.735 Running for 1 seconds... 
00:07:07.735 00:07:07.735 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.735 ------------------------------------------------------------------------------------ 00:07:07.735 0,0 435264/s 1700 MiB/s 0 0 00:07:07.735 ==================================================================================== 00:07:07.735 Total 435264/s 1700 MiB/s 0 0' 00:07:07.735 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.735 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.735 06:31:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.735 06:31:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.735 06:31:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.735 06:31:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.735 06:31:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.735 06:31:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.735 06:31:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.735 06:31:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.735 06:31:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.735 06:31:47 -- accel/accel.sh@42 -- # jq -r . 00:07:07.735 [2024-07-12 06:31:47.294791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:07.736 [2024-07-12 06:31:47.294947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68239 ] 00:07:07.736 [2024-07-12 06:31:47.436543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.736 [2024-07-12 06:31:47.472899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=0x1 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=fill 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=0x80 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 
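The val= lines here are accel.sh replaying accel_perf's settings back through a read loop, splitting each line on ':' and recording fields such as the fill byte (0x80) and the module. A skeletal version of that reader (structure inferred from the trace, not copied from accel.sh):

    # $out is assumed to hold a captured accel_perf stdout dump
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;     # e.g. fill
            *Module*)          accel_module=${val# } ;;  # e.g. software
        esac
    done <<< "$out"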
00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=software 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=64 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=64 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=1 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val=Yes 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:07.736 06:31:47 -- accel/accel.sh@21 -- # val= 00:07:07.736 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:07:07.736 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@21 -- # val= 00:07:09.112 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@21 -- # val= 00:07:09.112 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@21 -- # val= 00:07:09.112 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@21 -- # val= 00:07:09.112 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@21 -- # val= 00:07:09.112 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # IFS=: 
00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@21 -- # val= 00:07:09.112 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:07:09.112 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:07:09.112 06:31:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.112 06:31:48 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:09.112 06:31:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.112 00:07:09.112 real 0m2.690s 00:07:09.112 user 0m2.314s 00:07:09.112 sys 0m0.168s 00:07:09.112 06:31:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.112 06:31:48 -- common/autotest_common.sh@10 -- # set +x 00:07:09.112 ************************************ 00:07:09.112 END TEST accel_fill 00:07:09.112 ************************************ 00:07:09.112 06:31:48 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:09.112 06:31:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:09.112 06:31:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.112 06:31:48 -- common/autotest_common.sh@10 -- # set +x 00:07:09.112 ************************************ 00:07:09.112 START TEST accel_copy_crc32c 00:07:09.112 ************************************ 00:07:09.112 06:31:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:09.112 06:31:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.112 06:31:48 -- accel/accel.sh@17 -- # local accel_module 00:07:09.112 06:31:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.112 06:31:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.112 06:31:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.112 06:31:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.112 06:31:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.112 06:31:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.112 06:31:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.112 06:31:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.112 06:31:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.112 06:31:48 -- accel/accel.sh@42 -- # jq -r . 00:07:09.112 [2024-07-12 06:31:48.677644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:09.112 [2024-07-12 06:31:48.677776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68273 ] 00:07:09.112 [2024-07-12 06:31:48.817228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.112 [2024-07-12 06:31:48.858719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.488 06:31:50 -- accel/accel.sh@18 -- # out=' 00:07:10.488 SPDK Configuration: 00:07:10.488 Core mask: 0x1 00:07:10.488 00:07:10.488 Accel Perf Configuration: 00:07:10.488 Workload Type: copy_crc32c 00:07:10.488 CRC-32C seed: 0 00:07:10.488 Vector size: 4096 bytes 00:07:10.488 Transfer size: 4096 bytes 00:07:10.488 Vector count 1 00:07:10.488 Module: software 00:07:10.488 Queue depth: 32 00:07:10.488 Allocate depth: 32 00:07:10.488 # threads/core: 1 00:07:10.489 Run time: 1 seconds 00:07:10.489 Verify: Yes 00:07:10.489 00:07:10.489 Running for 1 seconds... 
00:07:10.489 00:07:10.489 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.489 ------------------------------------------------------------------------------------ 00:07:10.489 0,0 212576/s 830 MiB/s 0 0 00:07:10.489 ==================================================================================== 00:07:10.489 Total 212576/s 830 MiB/s 0 0' 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.489 06:31:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.489 06:31:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.489 06:31:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.489 06:31:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.489 06:31:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.489 06:31:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.489 06:31:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.489 06:31:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.489 06:31:50 -- accel/accel.sh@42 -- # jq -r . 00:07:10.489 [2024-07-12 06:31:50.026554] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:10.489 [2024-07-12 06:31:50.026708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68293 ] 00:07:10.489 [2024-07-12 06:31:50.166506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.489 [2024-07-12 06:31:50.210280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=0x1 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=0 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 
06:31:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=software 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=32 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=32 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=1 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val=Yes 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:10.489 06:31:50 -- accel/accel.sh@21 -- # val= 00:07:10.489 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:07:10.489 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@21 -- # val= 00:07:11.864 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@21 -- # val= 00:07:11.864 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@21 -- # val= 00:07:11.864 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@21 -- # val= 00:07:11.864 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # IFS=: 
00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@21 -- # val= 00:07:11.864 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@21 -- # val= 00:07:11.864 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:07:11.864 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:07:11.864 06:31:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.864 06:31:51 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:11.864 06:31:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.864 00:07:11.864 real 0m2.706s 00:07:11.864 user 0m2.309s 00:07:11.864 sys 0m0.183s 00:07:11.864 06:31:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.864 ************************************ 00:07:11.864 END TEST accel_copy_crc32c 00:07:11.864 ************************************ 00:07:11.864 06:31:51 -- common/autotest_common.sh@10 -- # set +x 00:07:11.864 06:31:51 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.864 06:31:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:11.864 06:31:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.864 06:31:51 -- common/autotest_common.sh@10 -- # set +x 00:07:11.864 ************************************ 00:07:11.864 START TEST accel_copy_crc32c_C2 00:07:11.864 ************************************ 00:07:11.864 06:31:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.864 06:31:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.864 06:31:51 -- accel/accel.sh@17 -- # local accel_module 00:07:11.864 06:31:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.864 06:31:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:11.864 06:31:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.864 06:31:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.864 06:31:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.864 06:31:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.864 06:31:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.864 06:31:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.864 06:31:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.864 06:31:51 -- accel/accel.sh@42 -- # jq -r . 00:07:11.864 [2024-07-12 06:31:51.420932] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:11.864 [2024-07-12 06:31:51.421053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68322 ] 00:07:11.864 [2024-07-12 06:31:51.560752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.864 [2024-07-12 06:31:51.604414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.240 06:31:52 -- accel/accel.sh@18 -- # out=' 00:07:13.240 SPDK Configuration: 00:07:13.240 Core mask: 0x1 00:07:13.240 00:07:13.240 Accel Perf Configuration: 00:07:13.240 Workload Type: copy_crc32c 00:07:13.240 CRC-32C seed: 0 00:07:13.240 Vector size: 4096 bytes 00:07:13.240 Transfer size: 8192 bytes 00:07:13.240 Vector count 2 00:07:13.240 Module: software 00:07:13.240 Queue depth: 32 00:07:13.240 Allocate depth: 32 00:07:13.240 # threads/core: 1 00:07:13.240 Run time: 1 seconds 00:07:13.240 Verify: Yes 00:07:13.240 00:07:13.240 Running for 1 seconds... 00:07:13.240 00:07:13.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.240 ------------------------------------------------------------------------------------ 00:07:13.240 0,0 156896/s 1225 MiB/s 0 0 00:07:13.240 ==================================================================================== 00:07:13.240 Total 156896/s 612 MiB/s 0 0' 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:13.240 06:31:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.240 06:31:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.240 06:31:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.240 06:31:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.240 06:31:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.240 06:31:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.240 06:31:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.240 06:31:52 -- accel/accel.sh@42 -- # jq -r . 00:07:13.240 [2024-07-12 06:31:52.769215] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
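A quick arithmetic check on the copy_crc32c -C 2 table above: vector count 2 means two 4096-byte vectors per transfer (2 * 4096 = 8192 bytes, the "Transfer size" in the config block), so 156896 transfers/s works out to about 1225 MiB/s, matching the per-core row. The Total row's 612 MiB/s corresponds to multiplying by the 4096-byte vector size instead, so the mismatch looks like an accounting quirk in accel_perf's summary line rather than a real throughput drop:

    $ echo $(( 156896 * 8192 / 1048576 )) $(( 156896 * 4096 / 1048576 ))
    1225 612

The same per-core check holds for the other tables in this log, e.g. 212576 * 4096 / 1048576 = 830 (MiB/s) for the plain copy_crc32c run.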
00:07:13.240 [2024-07-12 06:31:52.769347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68340 ] 00:07:13.240 [2024-07-12 06:31:52.912685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.240 [2024-07-12 06:31:52.951010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=0x1 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=0 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=software 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=32 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=32 
00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=1 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val=Yes 00:07:13.240 06:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:52 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:07:13.240 06:31:53 -- accel/accel.sh@21 -- # val= 00:07:13.240 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.240 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:07:13.240 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@21 -- # val= 00:07:14.616 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@21 -- # val= 00:07:14.616 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@21 -- # val= 00:07:14.616 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@21 -- # val= 00:07:14.616 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@21 -- # val= 00:07:14.616 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@21 -- # val= 00:07:14.616 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:07:14.616 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:07:14.616 06:31:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.616 06:31:54 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:14.616 06:31:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.616 00:07:14.616 real 0m2.714s 00:07:14.616 user 0m2.333s 00:07:14.616 sys 0m0.167s 00:07:14.616 06:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.616 ************************************ 00:07:14.616 END TEST accel_copy_crc32c_C2 00:07:14.616 ************************************ 00:07:14.616 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:07:14.616 06:31:54 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:14.616 06:31:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:07:14.616 06:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.616 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:07:14.616 ************************************ 00:07:14.616 START TEST accel_dualcast 00:07:14.616 ************************************ 00:07:14.616 06:31:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:14.616 06:31:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.616 06:31:54 -- accel/accel.sh@17 -- # local accel_module 00:07:14.616 06:31:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:14.616 06:31:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:14.616 06:31:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.616 06:31:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.616 06:31:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.616 06:31:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.616 06:31:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.616 06:31:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.616 06:31:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.616 06:31:54 -- accel/accel.sh@42 -- # jq -r . 00:07:14.616 [2024-07-12 06:31:54.184850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:14.616 [2024-07-12 06:31:54.185012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68376 ] 00:07:14.616 [2024-07-12 06:31:54.323802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.616 [2024-07-12 06:31:54.367840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.991 06:31:55 -- accel/accel.sh@18 -- # out=' 00:07:15.991 SPDK Configuration: 00:07:15.991 Core mask: 0x1 00:07:15.991 00:07:15.991 Accel Perf Configuration: 00:07:15.991 Workload Type: dualcast 00:07:15.991 Transfer size: 4096 bytes 00:07:15.991 Vector count 1 00:07:15.991 Module: software 00:07:15.991 Queue depth: 32 00:07:15.991 Allocate depth: 32 00:07:15.991 # threads/core: 1 00:07:15.991 Run time: 1 seconds 00:07:15.991 Verify: Yes 00:07:15.991 00:07:15.991 Running for 1 seconds... 00:07:15.991 00:07:15.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.991 ------------------------------------------------------------------------------------ 00:07:15.991 0,0 308960/s 1206 MiB/s 0 0 00:07:15.991 ==================================================================================== 00:07:15.991 Total 308960/s 1206 MiB/s 0 0' 00:07:15.991 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.991 06:31:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:15.991 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.991 06:31:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:15.991 06:31:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.991 06:31:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.991 06:31:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.991 06:31:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.991 06:31:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.991 06:31:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.991 06:31:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.991 06:31:55 -- accel/accel.sh@42 -- # jq -r . 
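Each of these tests launches the same binary with only the workload flags swapped, handing the JSON accel configuration to the tool on file descriptor 62, which is why every command line reads -c /dev/fd/62. One plausible shape for that wrapper, reconstructed from the traced variables (accel_json_cfg, local IFS=,) rather than copied from accel.sh, and with the JSON envelope assumed:

    accel_perf() {
      local IFS=,    # join any JSON config fragments with commas
      /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 "$@" \
        62<<< "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
    }
    # usage: accel_perf -t 1 -w dualcast -y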
00:07:15.991 [2024-07-12 06:31:55.533550] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:15.991 [2024-07-12 06:31:55.533678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68390 ] 00:07:15.991 [2024-07-12 06:31:55.673210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.991 [2024-07-12 06:31:55.715410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.991 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.991 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.991 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.991 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=0x1 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=dualcast 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=software 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=32 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=32 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=1 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 
06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val=Yes 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:15.992 06:31:55 -- accel/accel.sh@21 -- # val= 00:07:15.992 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:15.992 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:17.366 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:17.366 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:17.366 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:17.366 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:17.366 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 ************************************ 00:07:17.366 END TEST accel_dualcast 00:07:17.366 ************************************ 00:07:17.366 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:17.366 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:17.366 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:17.366 06:31:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.366 06:31:56 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:17.366 06:31:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.366 00:07:17.366 real 0m2.696s 00:07:17.366 user 0m2.301s 00:07:17.366 sys 0m0.178s 00:07:17.366 06:31:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.366 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.366 06:31:56 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:17.366 06:31:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:17.366 06:31:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.366 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.366 ************************************ 00:07:17.366 START TEST accel_compare 00:07:17.366 ************************************ 00:07:17.366 06:31:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:17.366 
06:31:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.366 06:31:56 -- accel/accel.sh@17 -- # local accel_module 00:07:17.366 06:31:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:17.366 06:31:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.366 06:31:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.366 06:31:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.366 06:31:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.366 06:31:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.366 06:31:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.366 06:31:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.366 06:31:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.366 06:31:56 -- accel/accel.sh@42 -- # jq -r . 00:07:17.366 [2024-07-12 06:31:56.920037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:17.366 [2024-07-12 06:31:56.920121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68424 ] 00:07:17.366 [2024-07-12 06:31:57.053250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.366 [2024-07-12 06:31:57.089846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.739 06:31:58 -- accel/accel.sh@18 -- # out=' 00:07:18.739 SPDK Configuration: 00:07:18.739 Core mask: 0x1 00:07:18.739 00:07:18.739 Accel Perf Configuration: 00:07:18.739 Workload Type: compare 00:07:18.739 Transfer size: 4096 bytes 00:07:18.739 Vector count 1 00:07:18.739 Module: software 00:07:18.739 Queue depth: 32 00:07:18.739 Allocate depth: 32 00:07:18.739 # threads/core: 1 00:07:18.739 Run time: 1 seconds 00:07:18.739 Verify: Yes 00:07:18.739 00:07:18.739 Running for 1 seconds... 00:07:18.739 00:07:18.739 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.739 ------------------------------------------------------------------------------------ 00:07:18.739 0,0 420800/s 1643 MiB/s 0 0 00:07:18.739 ==================================================================================== 00:07:18.739 Total 420800/s 1643 MiB/s 0 0' 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.739 06:31:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:18.739 06:31:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.739 06:31:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:18.739 06:31:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.739 06:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.739 06:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.739 06:31:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.739 06:31:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.739 06:31:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.739 06:31:58 -- accel/accel.sh@42 -- # jq -r . 00:07:18.739 [2024-07-12 06:31:58.251278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
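The compare numbers above are consistent with the 4096-byte transfer size: 420800 transfers/s comes to 1643 MiB/s, exactly the bandwidth printed:

    $ echo $(( 420800 * 4096 / 1048576 ))
    1643

compare posts the highest rate of the software-module workloads in this log (420800/s versus 308960/s for dualcast), plausibly because it only reads and compares buffers and writes no destination.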
00:07:18.739 [2024-07-12 06:31:58.251401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68444 ] 00:07:18.739 [2024-07-12 06:31:58.388889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.739 [2024-07-12 06:31:58.424786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.739 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.739 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.739 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.739 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.739 06:31:58 -- accel/accel.sh@21 -- # val=0x1 00:07:18.739 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.739 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.739 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val=compare 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val=software 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val=32 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val=32 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val=1 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val=Yes 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:18.740 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:18.740 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:18.740 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:19.675 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:19.675 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:19.675 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:19.675 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:19.675 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:19.675 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:19.675 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:19.675 06:31:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.675 06:31:59 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:19.675 06:31:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.675 00:07:19.675 real 0m2.662s 00:07:19.675 user 0m2.299s 00:07:19.675 sys 0m0.154s 00:07:19.675 ************************************ 00:07:19.675 END TEST accel_compare 00:07:19.675 ************************************ 00:07:19.675 06:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.675 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:07:19.675 06:31:59 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.675 06:31:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:19.675 06:31:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.675 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:07:19.933 ************************************ 00:07:19.933 START TEST accel_xor 00:07:19.933 ************************************ 00:07:19.933 06:31:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:19.933 06:31:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.933 06:31:59 -- accel/accel.sh@17 -- # local accel_module 00:07:19.933 
06:31:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:19.933 06:31:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.933 06:31:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.933 06:31:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.933 06:31:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.933 06:31:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.933 06:31:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.933 06:31:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.933 06:31:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.933 06:31:59 -- accel/accel.sh@42 -- # jq -r . 00:07:19.933 [2024-07-12 06:31:59.621740] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:19.933 [2024-07-12 06:31:59.621859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68473 ] 00:07:19.933 [2024-07-12 06:31:59.766178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.933 [2024-07-12 06:31:59.800620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.310 06:32:00 -- accel/accel.sh@18 -- # out=' 00:07:21.310 SPDK Configuration: 00:07:21.310 Core mask: 0x1 00:07:21.310 00:07:21.310 Accel Perf Configuration: 00:07:21.310 Workload Type: xor 00:07:21.310 Source buffers: 2 00:07:21.310 Transfer size: 4096 bytes 00:07:21.310 Vector count 1 00:07:21.310 Module: software 00:07:21.310 Queue depth: 32 00:07:21.310 Allocate depth: 32 00:07:21.310 # threads/core: 1 00:07:21.310 Run time: 1 seconds 00:07:21.310 Verify: Yes 00:07:21.310 00:07:21.310 Running for 1 seconds... 00:07:21.310 00:07:21.310 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.310 ------------------------------------------------------------------------------------ 00:07:21.310 0,0 241248/s 942 MiB/s 0 0 00:07:21.310 ==================================================================================== 00:07:21.310 Total 241248/s 942 MiB/s 0 0' 00:07:21.310 06:32:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:21.310 06:32:00 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:00 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.310 06:32:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.310 06:32:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.310 06:32:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.310 06:32:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.310 06:32:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.310 06:32:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.310 06:32:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.310 06:32:00 -- accel/accel.sh@42 -- # jq -r . 00:07:21.310 [2024-07-12 06:32:00.952230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
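For the xor workload each destination byte is the bitwise XOR of the corresponding bytes of all source buffers: two sources here ("Source buffers: 2" in the config above), and three in the next test, which passes -x 3. Illustrated with plain shell arithmetic only:

    $ printf '%02x\n' $(( 0xA5 ^ 0x5A ))          # 2 sources
    ff
    $ printf '%02x\n' $(( 0xA5 ^ 0x5A ^ 0x0F ))   # 3 sources, as with -x 3
    f0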
00:07:21.310 [2024-07-12 06:32:00.952329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68487 ] 00:07:21.310 [2024-07-12 06:32:01.096408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.310 [2024-07-12 06:32:01.130443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=0x1 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=xor 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=2 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=software 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=32 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=32 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=1 00:07:21.310 06:32:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val=Yes 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:21.310 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:21.310 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:21.310 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:22.686 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:22.686 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:22.686 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:22.686 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:22.686 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:22.686 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:22.686 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:22.686 06:32:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.686 06:32:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:22.686 06:32:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.686 ************************************ 00:07:22.686 END TEST accel_xor 00:07:22.686 ************************************ 00:07:22.686 00:07:22.686 real 0m2.669s 00:07:22.686 user 0m2.302s 00:07:22.686 sys 0m0.158s 00:07:22.686 06:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.686 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 06:32:02 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:22.686 06:32:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:22.686 06:32:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.686 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 ************************************ 00:07:22.686 START TEST accel_xor 00:07:22.686 ************************************ 00:07:22.686 
06:32:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:22.686 06:32:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.686 06:32:02 -- accel/accel.sh@17 -- # local accel_module 00:07:22.686 06:32:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:22.686 06:32:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:22.686 06:32:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.686 06:32:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.686 06:32:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.686 06:32:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.686 06:32:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.686 06:32:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.686 06:32:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.686 06:32:02 -- accel/accel.sh@42 -- # jq -r . 00:07:22.686 [2024-07-12 06:32:02.328528] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:22.686 [2024-07-12 06:32:02.328649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68527 ] 00:07:22.686 [2024-07-12 06:32:02.477667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.686 [2024-07-12 06:32:02.521557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.060 06:32:03 -- accel/accel.sh@18 -- # out=' 00:07:24.060 SPDK Configuration: 00:07:24.060 Core mask: 0x1 00:07:24.060 00:07:24.060 Accel Perf Configuration: 00:07:24.060 Workload Type: xor 00:07:24.060 Source buffers: 3 00:07:24.060 Transfer size: 4096 bytes 00:07:24.060 Vector count 1 00:07:24.060 Module: software 00:07:24.060 Queue depth: 32 00:07:24.060 Allocate depth: 32 00:07:24.060 # threads/core: 1 00:07:24.060 Run time: 1 seconds 00:07:24.060 Verify: Yes 00:07:24.060 00:07:24.060 Running for 1 seconds... 00:07:24.060 00:07:24.060 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.060 ------------------------------------------------------------------------------------ 00:07:24.060 0,0 203168/s 793 MiB/s 0 0 00:07:24.060 ==================================================================================== 00:07:24.060 Total 203168/s 793 MiB/s 0 0' 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:24.060 06:32:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:24.060 06:32:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.060 06:32:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.060 06:32:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.060 06:32:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.060 06:32:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.060 06:32:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.060 06:32:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.060 06:32:03 -- accel/accel.sh@42 -- # jq -r . 00:07:24.060 [2024-07-12 06:32:03.689800] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:24.060 [2024-07-12 06:32:03.689892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68541 ] 00:07:24.060 [2024-07-12 06:32:03.827665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.060 [2024-07-12 06:32:03.861815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=0x1 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=xor 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=3 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=software 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=32 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=32 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=1 00:07:24.060 06:32:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val=Yes 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 06:32:03 -- accel/accel.sh@21 -- # val= 00:07:24.060 06:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:25.487 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:25.487 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:25.487 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:25.487 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:25.487 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 ************************************ 00:07:25.487 END TEST accel_xor 00:07:25.487 ************************************ 00:07:25.487 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:25.487 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:25.487 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:25.487 06:32:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.487 06:32:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:25.487 06:32:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.487 00:07:25.487 real 0m2.708s 00:07:25.487 user 0m2.339s 00:07:25.487 sys 0m0.160s 00:07:25.487 06:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.487 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.487 06:32:05 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:25.487 06:32:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:25.487 06:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.487 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.487 ************************************ 00:07:25.487 START TEST accel_dif_verify 00:07:25.487 ************************************ 
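(Editorial note, not captured output.) The dif_verify configuration that follows reports a 4096-byte transfer split into 512-byte blocks with 8 bytes of metadata each, so every transfer carries 4096 / 512 = 8 DIF tuples, i.e. 64 bytes of protection info:

    # Editorial sketch: the DIF geometry implied by the configuration below.
    awk 'BEGIN { xfer=4096; blk=512; md=8; n=xfer/blk;
                 printf "%d blocks/transfer, %d B of protection info\n", n, n*md }'
    # prints: 8 blocks/transfer, 64 B of protection info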
00:07:25.487 06:32:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:25.487 06:32:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.487 06:32:05 -- accel/accel.sh@17 -- # local accel_module 00:07:25.487 06:32:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:25.487 06:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.487 06:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.487 06:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.487 06:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.487 06:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.487 06:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.487 06:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.487 06:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.487 06:32:05 -- accel/accel.sh@42 -- # jq -r . 00:07:25.487 [2024-07-12 06:32:05.079440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:25.487 [2024-07-12 06:32:05.079558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68576 ] 00:07:25.487 [2024-07-12 06:32:05.220180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.487 [2024-07-12 06:32:05.262933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.863 06:32:06 -- accel/accel.sh@18 -- # out=' 00:07:26.863 SPDK Configuration: 00:07:26.863 Core mask: 0x1 00:07:26.863 00:07:26.863 Accel Perf Configuration: 00:07:26.863 Workload Type: dif_verify 00:07:26.863 Vector size: 4096 bytes 00:07:26.863 Transfer size: 4096 bytes 00:07:26.863 Block size: 512 bytes 00:07:26.863 Metadata size: 8 bytes 00:07:26.863 Vector count 1 00:07:26.863 Module: software 00:07:26.863 Queue depth: 32 00:07:26.863 Allocate depth: 32 00:07:26.863 # threads/core: 1 00:07:26.863 Run time: 1 seconds 00:07:26.863 Verify: No 00:07:26.863 00:07:26.863 Running for 1 seconds... 00:07:26.863 00:07:26.863 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.863 ------------------------------------------------------------------------------------ 00:07:26.863 0,0 92704/s 367 MiB/s 0 0 00:07:26.863 ==================================================================================== 00:07:26.863 Total 92704/s 362 MiB/s 0 0' 00:07:26.863 06:32:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.863 06:32:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.863 06:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.863 06:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.863 06:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.863 06:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.863 06:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.863 06:32:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.863 06:32:06 -- accel/accel.sh@42 -- # jq -r . 00:07:26.863 [2024-07-12 06:32:06.428230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:26.863 [2024-07-12 06:32:06.428365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68590 ] 00:07:26.863 [2024-07-12 06:32:06.568618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.863 [2024-07-12 06:32:06.611078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val=0x1 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val=dif_verify 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val=software 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 
-- # val=32 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val=32 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val=1 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val=No 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:26.863 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:26.863 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:26.863 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:28.232 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:28.232 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:28.232 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:28.232 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:28.232 ************************************ 00:07:28.232 END TEST accel_dif_verify 00:07:28.232 ************************************ 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:28.232 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:28.232 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:28.232 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:28.232 06:32:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.232 06:32:07 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:28.232 06:32:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.232 00:07:28.232 real 0m2.702s 00:07:28.232 user 0m2.298s 00:07:28.232 sys 0m0.194s 00:07:28.232 06:32:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.232 
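(Editorial note, not captured output.) The dif_verify tables above show the same 92704 transfers/s as 367 MiB/s on the per-core line but 362 MiB/s on the Total line. The gap is consistent with the per-core figure counting payload plus the 64 bytes of interleaved metadata per transfer while the Total counts payload only; that is an inference from the arithmetic, not something accel_perf states:

    # Editorial sketch: reconciling the two bandwidth columns at 92704 transfers/s.
    awk 'BEGIN { t=92704;
                 printf "payload only:  %.1f MiB/s\n", t*4096/2^20;
                 printf "payload + DIF: %.1f MiB/s\n", t*(4096+64)/2^20 }'
    # prints: payload only:  362.1 MiB/s, payload + DIF: 367.8 MiB/s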
06:32:07 -- common/autotest_common.sh@10 -- # set +x 00:07:28.232 06:32:07 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:28.232 06:32:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:28.232 06:32:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.232 06:32:07 -- common/autotest_common.sh@10 -- # set +x 00:07:28.232 ************************************ 00:07:28.232 START TEST accel_dif_generate 00:07:28.232 ************************************ 00:07:28.232 06:32:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:28.232 06:32:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.232 06:32:07 -- accel/accel.sh@17 -- # local accel_module 00:07:28.232 06:32:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:28.232 06:32:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.232 06:32:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.232 06:32:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.232 06:32:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.232 06:32:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.232 06:32:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.232 06:32:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.232 06:32:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.232 06:32:07 -- accel/accel.sh@42 -- # jq -r . 00:07:28.232 [2024-07-12 06:32:07.820508] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.232 [2024-07-12 06:32:07.820627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68624 ] 00:07:28.232 [2024-07-12 06:32:07.957443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.232 [2024-07-12 06:32:07.993152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.603 06:32:09 -- accel/accel.sh@18 -- # out=' 00:07:29.603 SPDK Configuration: 00:07:29.603 Core mask: 0x1 00:07:29.603 00:07:29.603 Accel Perf Configuration: 00:07:29.603 Workload Type: dif_generate 00:07:29.603 Vector size: 4096 bytes 00:07:29.603 Transfer size: 4096 bytes 00:07:29.603 Block size: 512 bytes 00:07:29.603 Metadata size: 8 bytes 00:07:29.603 Vector count 1 00:07:29.603 Module: software 00:07:29.603 Queue depth: 32 00:07:29.603 Allocate depth: 32 00:07:29.603 # threads/core: 1 00:07:29.603 Run time: 1 seconds 00:07:29.603 Verify: No 00:07:29.603 00:07:29.603 Running for 1 seconds... 
00:07:29.603 00:07:29.603 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.603 ------------------------------------------------------------------------------------ 00:07:29.603 0,0 114272/s 453 MiB/s 0 0 00:07:29.603 ==================================================================================== 00:07:29.603 Total 114272/s 446 MiB/s 0 0' 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:29.603 06:32:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.603 06:32:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.603 06:32:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.603 06:32:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.603 06:32:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.603 06:32:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.603 06:32:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.603 06:32:09 -- accel/accel.sh@42 -- # jq -r . 00:07:29.603 [2024-07-12 06:32:09.147238] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:29.603 [2024-07-12 06:32:09.147361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68644 ] 00:07:29.603 [2024-07-12 06:32:09.291067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.603 [2024-07-12 06:32:09.329473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=0x1 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=dif_generate 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 
00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=software 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=32 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=32 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=1 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val=No 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:29.603 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:29.603 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:29.603 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:30.981 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:30.981 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:30.981 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.981 06:32:10 -- 
accel/accel.sh@20 -- # IFS=: 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:30.981 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:30.981 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:30.981 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:30.981 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:30.981 06:32:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.981 06:32:10 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:30.981 06:32:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.981 00:07:30.981 real 0m2.672s 00:07:30.981 user 0m2.308s 00:07:30.981 sys 0m0.158s 00:07:30.981 06:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.981 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:07:30.981 ************************************ 00:07:30.981 END TEST accel_dif_generate 00:07:30.981 ************************************ 00:07:30.981 06:32:10 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:30.981 06:32:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:30.981 06:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.981 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:07:30.981 ************************************ 00:07:30.981 START TEST accel_dif_generate_copy 00:07:30.981 ************************************ 00:07:30.981 06:32:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:30.981 06:32:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.981 06:32:10 -- accel/accel.sh@17 -- # local accel_module 00:07:30.981 06:32:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.981 06:32:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.981 06:32:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.981 06:32:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.981 06:32:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.981 06:32:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.981 06:32:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.981 06:32:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.981 06:32:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.981 06:32:10 -- accel/accel.sh@42 -- # jq -r . 00:07:30.981 [2024-07-12 06:32:10.540532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
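(Editorial note, not captured output.) accel.sh@105 above hands off to dif_generate_copy which, as the name suggests, generates protection info while also copying the payload to a destination buffer, whereas the plain dif_generate case wrote tags for a buffer in place. The table that follows reports 88512 transfers/s, down from dif_generate's 114272/s above, which is consistent with the cost of the extra copy (again an inference, not a tool-reported breakdown):

    # Editorial sketch: the workload name is the only change between the two cases.
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$PERF" -t 1 -w dif_generate       # generate DIF for the source buffer
    "$PERF" -t 1 -w dif_generate_copy  # generate DIF while copying to a destination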
00:07:30.981 [2024-07-12 06:32:10.540659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68673 ] 00:07:30.981 [2024-07-12 06:32:10.683141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.981 [2024-07-12 06:32:10.718889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.378 06:32:11 -- accel/accel.sh@18 -- # out=' 00:07:32.378 SPDK Configuration: 00:07:32.378 Core mask: 0x1 00:07:32.378 00:07:32.378 Accel Perf Configuration: 00:07:32.378 Workload Type: dif_generate_copy 00:07:32.378 Vector size: 4096 bytes 00:07:32.378 Transfer size: 4096 bytes 00:07:32.378 Vector count 1 00:07:32.378 Module: software 00:07:32.378 Queue depth: 32 00:07:32.378 Allocate depth: 32 00:07:32.378 # threads/core: 1 00:07:32.378 Run time: 1 seconds 00:07:32.378 Verify: No 00:07:32.378 00:07:32.378 Running for 1 seconds... 00:07:32.378 00:07:32.378 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.378 ------------------------------------------------------------------------------------ 00:07:32.378 0,0 88512/s 351 MiB/s 0 0 00:07:32.378 ==================================================================================== 00:07:32.378 Total 88512/s 345 MiB/s 0 0' 00:07:32.378 06:32:11 -- accel/accel.sh@20 -- # IFS=: 00:07:32.378 06:32:11 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.379 06:32:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.379 06:32:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.379 06:32:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.379 06:32:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.379 06:32:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.379 06:32:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.379 06:32:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.379 06:32:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.379 06:32:11 -- accel/accel.sh@42 -- # jq -r . 00:07:32.379 [2024-07-12 06:32:11.890381] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:32.379 [2024-07-12 06:32:11.890508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68692 ] 00:07:32.379 [2024-07-12 06:32:12.030684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.379 [2024-07-12 06:32:12.071762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val=0x1 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val=software 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val=32 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val=32 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 
-- # val=1 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val=No 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:32.379 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:32.379 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:32.379 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:33.328 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:33.328 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:33.328 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:33.328 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:33.328 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:33.328 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:33.328 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:33.328 06:32:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.328 06:32:13 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:33.328 06:32:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.328 00:07:33.328 real 0m2.701s 00:07:33.328 user 0m2.324s 00:07:33.328 sys 0m0.167s 00:07:33.329 06:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.329 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:33.329 ************************************ 00:07:33.329 END TEST accel_dif_generate_copy 00:07:33.329 ************************************ 00:07:33.589 06:32:13 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:33.589 06:32:13 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.589 06:32:13 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:33.589 06:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.589 06:32:13 -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.589 ************************************ 00:07:33.589 START TEST accel_comp 00:07:33.589 ************************************ 00:07:33.589 06:32:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.589 06:32:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.589 06:32:13 -- accel/accel.sh@17 -- # local accel_module 00:07:33.589 06:32:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.589 06:32:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.589 06:32:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.589 06:32:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.589 06:32:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.589 06:32:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.589 06:32:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.589 06:32:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.589 06:32:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.589 06:32:13 -- accel/accel.sh@42 -- # jq -r . 00:07:33.589 [2024-07-12 06:32:13.282219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:33.589 [2024-07-12 06:32:13.282309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68727 ] 00:07:33.589 [2024-07-12 06:32:13.422182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.589 [2024-07-12 06:32:13.458494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.964 06:32:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.964 00:07:34.964 SPDK Configuration: 00:07:34.964 Core mask: 0x1 00:07:34.964 00:07:34.964 Accel Perf Configuration: 00:07:34.964 Workload Type: compress 00:07:34.964 Transfer size: 4096 bytes 00:07:34.964 Vector count 1 00:07:34.964 Module: software 00:07:34.964 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.964 Queue depth: 32 00:07:34.964 Allocate depth: 32 00:07:34.964 # threads/core: 1 00:07:34.964 Run time: 1 seconds 00:07:34.964 Verify: No 00:07:34.964 00:07:34.964 Running for 1 seconds... 
00:07:34.964 00:07:34.964 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.964 ------------------------------------------------------------------------------------ 00:07:34.964 0,0 45760/s 190 MiB/s 0 0 00:07:34.964 ==================================================================================== 00:07:34.964 Total 45760/s 178 MiB/s 0 0' 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.964 06:32:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.964 06:32:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.964 06:32:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.964 06:32:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.964 06:32:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.964 06:32:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.964 06:32:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.964 06:32:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.964 06:32:14 -- accel/accel.sh@42 -- # jq -r . 00:07:34.964 [2024-07-12 06:32:14.618827] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:34.964 [2024-07-12 06:32:14.618984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68741 ] 00:07:34.964 [2024-07-12 06:32:14.757325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.964 [2024-07-12 06:32:14.792824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=0x1 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=compress 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 
00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=software 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=32 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=32 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=1 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val=No 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.964 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.964 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:34.964 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:34.965 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.965 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:34.965 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:36.339 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:36.339 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:36.339 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@21 -- # val= 
00:07:36.339 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:36.339 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:36.339 ************************************ 00:07:36.339 END TEST accel_comp 00:07:36.339 ************************************ 00:07:36.339 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:36.339 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:36.339 06:32:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.339 06:32:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:36.339 06:32:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.339 00:07:36.339 real 0m2.671s 00:07:36.339 user 0m2.309s 00:07:36.339 sys 0m0.154s 00:07:36.339 06:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.339 06:32:15 -- common/autotest_common.sh@10 -- # set +x 00:07:36.339 06:32:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.339 06:32:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:36.339 06:32:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.339 06:32:15 -- common/autotest_common.sh@10 -- # set +x 00:07:36.339 ************************************ 00:07:36.339 START TEST accel_decomp 00:07:36.339 ************************************ 00:07:36.339 06:32:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.339 06:32:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.339 06:32:15 -- accel/accel.sh@17 -- # local accel_module 00:07:36.339 06:32:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.339 06:32:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.339 06:32:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.339 06:32:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.339 06:32:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.339 06:32:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.339 06:32:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.339 06:32:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.339 06:32:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.339 06:32:15 -- accel/accel.sh@42 -- # jq -r . 00:07:36.339 [2024-07-12 06:32:15.991048] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:36.339 [2024-07-12 06:32:15.991171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68770 ] 00:07:36.339 [2024-07-12 06:32:16.134069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.339 [2024-07-12 06:32:16.181549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.714 06:32:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:37.714 00:07:37.714 SPDK Configuration: 00:07:37.714 Core mask: 0x1 00:07:37.714 00:07:37.714 Accel Perf Configuration: 00:07:37.714 Workload Type: decompress 00:07:37.714 Transfer size: 4096 bytes 00:07:37.714 Vector count 1 00:07:37.714 Module: software 00:07:37.714 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.714 Queue depth: 32 00:07:37.714 Allocate depth: 32 00:07:37.714 # threads/core: 1 00:07:37.714 Run time: 1 seconds 00:07:37.714 Verify: Yes 00:07:37.714 00:07:37.714 Running for 1 seconds... 00:07:37.714 00:07:37.714 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.714 ------------------------------------------------------------------------------------ 00:07:37.714 0,0 63680/s 117 MiB/s 0 0 00:07:37.714 ==================================================================================== 00:07:37.714 Total 63680/s 248 MiB/s 0 0' 00:07:37.714 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.714 06:32:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.714 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.714 06:32:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.714 06:32:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.714 06:32:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.714 06:32:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.714 06:32:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.714 06:32:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.714 06:32:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.714 06:32:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.714 06:32:17 -- accel/accel.sh@42 -- # jq -r . 00:07:37.714 [2024-07-12 06:32:17.334317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
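(Editorial note, not captured output.) Note the Verify flip between the two compression cases: accel_comp above ran with Verify: No, while this accel_decomp run passes -y and reports Verify: Yes, presumably round-tripping the decompressed output against the original bib corpus. Side by side, as the harness invokes them:

    # Editorial sketch: the compress/decompress pair over the checked-in corpus.
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
    "$PERF" -t 1 -w compress   -l "$BIB"      # Verify: No in the tables above
    "$PERF" -t 1 -w decompress -l "$BIB" -y   # Verify: Yes in the run above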
00:07:37.714 [2024-07-12 06:32:17.334405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68795 ] 00:07:37.714 [2024-07-12 06:32:17.465374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.714 [2024-07-12 06:32:17.507724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=0x1 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=decompress 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=software 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=32 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- 
accel/accel.sh@21 -- # val=32 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=1 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val=Yes 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:37.715 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:37.715 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:37.715 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:39.090 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:39.090 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:39.090 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:39.090 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:39.090 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:39.090 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:39.090 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:39.090 06:32:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.090 06:32:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.090 06:32:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.090 00:07:39.090 real 0m2.689s 00:07:39.090 user 0m2.308s 00:07:39.090 sys 0m0.173s 00:07:39.090 06:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.090 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:07:39.090 ************************************ 00:07:39.090 END TEST accel_decomp 00:07:39.090 ************************************ 00:07:39.090 06:32:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
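The real/user/sys triple and the START/END banners around each case come from the harness's run_test wrapper, not from accel_perf. Stripped to its core it behaves like this sketch (the actual helper in autotest_common.sh also handles xtrace toggling and argument checks, which is what the '[' 11 -le 1 ']' traces appear to be):

  run_test() {
      local test_name=$1
      shift
      echo "START TEST $test_name"
      time "$@"    # produces the real/user/sys lines in the log
      echo "END TEST $test_name"
  }

The next case, accel_decmop_full, adds -o 0 to the same decompress workload; judging by the configuration header it goes on to print, a zero transfer-size override makes accel_perf size transfers from the prepared input itself (111250 bytes) instead of the 4096-byte default.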
00:07:39.090 06:32:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:39.090 06:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.090 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:07:39.090 ************************************ 00:07:39.090 START TEST accel_decmop_full 00:07:39.090 ************************************ 00:07:39.090 06:32:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.090 06:32:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.090 06:32:18 -- accel/accel.sh@17 -- # local accel_module 00:07:39.090 06:32:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.090 06:32:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.090 06:32:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.090 06:32:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.090 06:32:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.090 06:32:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.090 06:32:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.090 06:32:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.091 06:32:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.091 06:32:18 -- accel/accel.sh@42 -- # jq -r . 00:07:39.091 [2024-07-12 06:32:18.723198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:39.091 [2024-07-12 06:32:18.723324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68824 ] 00:07:39.091 [2024-07-12 06:32:18.862678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.091 [2024-07-12 06:32:18.905245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.466 06:32:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.466 00:07:40.466 SPDK Configuration: 00:07:40.466 Core mask: 0x1 00:07:40.466 00:07:40.466 Accel Perf Configuration: 00:07:40.466 Workload Type: decompress 00:07:40.466 Transfer size: 111250 bytes 00:07:40.466 Vector count 1 00:07:40.466 Module: software 00:07:40.466 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.466 Queue depth: 32 00:07:40.466 Allocate depth: 32 00:07:40.466 # threads/core: 1 00:07:40.466 Run time: 1 seconds 00:07:40.466 Verify: Yes 00:07:40.466 00:07:40.466 Running for 1 seconds... 
00:07:40.466 00:07:40.466 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.466 ------------------------------------------------------------------------------------ 00:07:40.466 0,0 4000/s 165 MiB/s 0 0 00:07:40.466 ==================================================================================== 00:07:40.466 Total 4000/s 424 MiB/s 0 0' 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.466 06:32:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.466 06:32:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.466 06:32:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.466 06:32:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.466 06:32:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.466 06:32:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.466 06:32:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.466 06:32:20 -- accel/accel.sh@42 -- # jq -r . 00:07:40.466 [2024-07-12 06:32:20.085590] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.466 [2024-07-12 06:32:20.085720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68838 ] 00:07:40.466 [2024-07-12 06:32:20.229652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.466 [2024-07-12 06:32:20.264644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=0x1 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=decompress 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.466 06:32:20 -- accel/accel.sh@20 
-- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=software 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=32 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=32 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=1 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val=Yes 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:40.466 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:40.466 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:40.466 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:41.841 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:41.841 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:41.841 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@21 -- # 
val= 00:07:41.841 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:41.841 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:41.841 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:41.841 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:41.841 06:32:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.841 ************************************ 00:07:41.842 END TEST accel_decmop_full 00:07:41.842 ************************************ 00:07:41.842 06:32:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:41.842 06:32:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.842 00:07:41.842 real 0m2.717s 00:07:41.842 user 0m2.343s 00:07:41.842 sys 0m0.163s 00:07:41.842 06:32:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.842 06:32:21 -- common/autotest_common.sh@10 -- # set +x 00:07:41.842 06:32:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.842 06:32:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:41.842 06:32:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.842 06:32:21 -- common/autotest_common.sh@10 -- # set +x 00:07:41.842 ************************************ 00:07:41.842 START TEST accel_decomp_mcore 00:07:41.842 ************************************ 00:07:41.842 06:32:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.842 06:32:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.842 06:32:21 -- accel/accel.sh@17 -- # local accel_module 00:07:41.842 06:32:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.842 06:32:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.842 06:32:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.842 06:32:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.842 06:32:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.842 06:32:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.842 06:32:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.842 06:32:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.842 06:32:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.842 06:32:21 -- accel/accel.sh@42 -- # jq -r . 00:07:41.842 [2024-07-12 06:32:21.479105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
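accel_decomp_mcore re-runs the 4 KiB decompress with -m 0xf, so the EAL parameter line below carries -c 0xf and four reactors come up, one per core in the mask. The mask is a plain bit field over core IDs:

  -m 0x1   cores: 0           (the single-core cases above)
  -m 0xf   cores: 0,1,2,3     (0b1111, this case)
  -m 0x5   cores: 0,2         (0b0101, shown only as an example)

Each selected core then appears as its own core,thread row in the throughput table that follows.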
00:07:41.842 [2024-07-12 06:32:21.479232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68878 ] 00:07:41.842 [2024-07-12 06:32:21.619604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.842 [2024-07-12 06:32:21.662712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.842 [2024-07-12 06:32:21.662789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.842 [2024-07-12 06:32:21.662851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.842 [2024-07-12 06:32:21.662854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.218 06:32:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:43.218 00:07:43.218 SPDK Configuration: 00:07:43.218 Core mask: 0xf 00:07:43.218 00:07:43.218 Accel Perf Configuration: 00:07:43.218 Workload Type: decompress 00:07:43.218 Transfer size: 4096 bytes 00:07:43.218 Vector count 1 00:07:43.218 Module: software 00:07:43.218 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.218 Queue depth: 32 00:07:43.218 Allocate depth: 32 00:07:43.218 # threads/core: 1 00:07:43.218 Run time: 1 seconds 00:07:43.218 Verify: Yes 00:07:43.218 00:07:43.218 Running for 1 seconds... 00:07:43.218 00:07:43.218 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.218 ------------------------------------------------------------------------------------ 00:07:43.218 0,0 47456/s 87 MiB/s 0 0 00:07:43.218 3,0 42240/s 77 MiB/s 0 0 00:07:43.218 2,0 38176/s 70 MiB/s 0 0 00:07:43.218 1,0 38208/s 70 MiB/s 0 0 00:07:43.218 ==================================================================================== 00:07:43.218 Total 166080/s 648 MiB/s 0 0' 00:07:43.218 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.218 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.218 06:32:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.218 06:32:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.218 06:32:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.218 06:32:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.218 06:32:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.218 06:32:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.218 06:32:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.218 06:32:22 -- accel/accel.sh@42 -- # jq -r . 00:07:43.218 [2024-07-12 06:32:22.828512] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
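The mcore table carries one row per core,thread pair plus a Total row summing them. At a fixed 4096-byte transfer size the bandwidth column follows from the transfer rate; for the Total row:

  166080 transfers/s x 4096 B = 680,263,680 B/s, i.e. about 648 MiB/s

matching the printed figure. The pass/fail logic visible at @28 only asserts the module and opcode ([[ -n software ]], [[ -n decompress ]]), so the uneven split between core 0 and cores 1-3 has no bearing on the result.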
00:07:43.218 [2024-07-12 06:32:22.828597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68895 ] 00:07:43.218 [2024-07-12 06:32:22.962233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.218 [2024-07-12 06:32:23.003149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.218 [2024-07-12 06:32:23.003227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.218 [2024-07-12 06:32:23.003319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.218 [2024-07-12 06:32:23.003322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=0xf 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=decompress 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=software 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 
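All of the '# val=' and 'case "$var" in' xtrace chatter in these sections is the harness re-running accel_perf and parsing its configuration printout line by line into accel_opc and accel_module. Reduced to essentials, the loop being traced looks like this sketch (names taken from the accel.sh trace above; not the verbatim implementation):

  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type') accel_opc=${val# } ;;    # e.g. decompress
          *Module)          accel_module=${val# } ;; # e.g. software
      esac
  done < <(run_accel_perf)   # hypothetical stand-in for the traced accel_perf command

  [[ -n $accel_module && -n $accel_opc ]]
  [[ $accel_module == software ]]

The three [[ ... ]] checks at the end of every case (@28 in the trace) are exactly these assertions.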
00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=32 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=32 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=1 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val=Yes 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:43.218 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:43.218 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:43.218 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- 
accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@21 -- # val= 00:07:44.592 06:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # IFS=: 00:07:44.592 06:32:24 -- accel/accel.sh@20 -- # read -r var val 00:07:44.592 06:32:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.592 06:32:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.592 06:32:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.592 00:07:44.592 real 0m2.698s 00:07:44.592 user 0m8.698s 00:07:44.592 sys 0m0.193s 00:07:44.592 06:32:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.592 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:44.592 ************************************ 00:07:44.592 END TEST accel_decomp_mcore 00:07:44.592 ************************************ 00:07:44.592 06:32:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.592 06:32:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:44.592 06:32:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.592 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:44.592 ************************************ 00:07:44.592 START TEST accel_decomp_full_mcore 00:07:44.592 ************************************ 00:07:44.592 06:32:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.592 06:32:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.592 06:32:24 -- accel/accel.sh@17 -- # local accel_module 00:07:44.592 06:32:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.592 06:32:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.592 06:32:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.592 06:32:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.592 06:32:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.592 06:32:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.592 06:32:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.592 06:32:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.592 06:32:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.592 06:32:24 -- accel/accel.sh@42 -- # jq -r . 00:07:44.592 [2024-07-12 06:32:24.218740] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:44.592 [2024-07-12 06:32:24.218841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68927 ] 00:07:44.592 [2024-07-12 06:32:24.357922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.592 [2024-07-12 06:32:24.397949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.592 [2024-07-12 06:32:24.398058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.592 [2024-07-12 06:32:24.398159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.592 [2024-07-12 06:32:24.398161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.968 06:32:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:45.968 00:07:45.968 SPDK Configuration: 00:07:45.968 Core mask: 0xf 00:07:45.968 00:07:45.968 Accel Perf Configuration: 00:07:45.968 Workload Type: decompress 00:07:45.968 Transfer size: 111250 bytes 00:07:45.968 Vector count 1 00:07:45.968 Module: software 00:07:45.968 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.968 Queue depth: 32 00:07:45.968 Allocate depth: 32 00:07:45.968 # threads/core: 1 00:07:45.968 Run time: 1 seconds 00:07:45.968 Verify: Yes 00:07:45.968 00:07:45.968 Running for 1 seconds... 00:07:45.968 00:07:45.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.968 ------------------------------------------------------------------------------------ 00:07:45.968 0,0 3872/s 159 MiB/s 0 0 00:07:45.968 3,0 4160/s 171 MiB/s 0 0 00:07:45.968 2,0 3808/s 157 MiB/s 0 0 00:07:45.968 1,0 4096/s 169 MiB/s 0 0 00:07:45.968 ==================================================================================== 00:07:45.968 Total 15936/s 1690 MiB/s 0 0' 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.968 06:32:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.968 06:32:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.968 06:32:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.968 06:32:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.968 06:32:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.968 06:32:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.968 06:32:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.968 06:32:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.968 06:32:25 -- accel/accel.sh@42 -- # jq -r . 00:07:45.968 [2024-07-12 06:32:25.591393] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
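Same table layout as the 4 KiB mcore case, but with full 111250-byte transfers the rates drop to a few thousand per second while bandwidth climbs; for the Total row:

  15936 transfers/s x 111250 B = 1,772,880,000 B/s, i.e. about 1690 MiB/s

consistent with the printed total. The 111250-byte size is whatever chunk accel_perf derived from the prepared bib input under -o 0, per the configuration header.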
00:07:45.968 [2024-07-12 06:32:25.591473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68954 ] 00:07:45.968 [2024-07-12 06:32:25.729770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.968 [2024-07-12 06:32:25.769541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.968 [2024-07-12 06:32:25.769631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.968 [2024-07-12 06:32:25.769742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.968 [2024-07-12 06:32:25.769747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val=0xf 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val=decompress 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val=software 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 
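The bracketed EAL parameter line printed at every startup is the DPDK environment the SPDK app layer builds. The recurring flags, annotated:

  --no-shconf                      no shared config files (single-process run)
  -c 0xf                           core mask, handed through from -m
  --huge-unlink                    unlink hugepage files once mapped
  --no-telemetry                   disable the DPDK telemetry socket
  --log-level=lib.eal:6            per-component log verbosity
  --iova-mode=pa                   use physical addresses as IOVAs
  --base-virtaddr=0x200000000000   fixed base for memory mappings
  --match-allocations              free hugepages back exactly as allocated
  --file-prefix=spdk_pid68954      per-PID hugepage namespace, keeps parallel jobs apart

Across this log only the core mask and the PID-derived file prefix change from case to case.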
00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val=32 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.968 06:32:25 -- accel/accel.sh@21 -- # val=32 00:07:45.968 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.968 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.969 06:32:25 -- accel/accel.sh@21 -- # val=1 00:07:45.969 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.969 06:32:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.969 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.969 06:32:25 -- accel/accel.sh@21 -- # val=Yes 00:07:45.969 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.969 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.969 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:45.969 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:45.969 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:45.969 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.342 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- 
accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:47.343 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:47.343 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:47.343 06:32:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.343 06:32:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.343 06:32:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.343 00:07:47.343 real 0m2.730s 00:07:47.343 user 0m8.865s 00:07:47.343 sys 0m0.185s 00:07:47.343 06:32:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.343 ************************************ 00:07:47.343 END TEST accel_decomp_full_mcore 00:07:47.343 ************************************ 00:07:47.343 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.343 06:32:26 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.343 06:32:26 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:47.343 06:32:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.343 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.343 ************************************ 00:07:47.343 START TEST accel_decomp_mthread 00:07:47.343 ************************************ 00:07:47.343 06:32:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.343 06:32:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.343 06:32:26 -- accel/accel.sh@17 -- # local accel_module 00:07:47.343 06:32:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.343 06:32:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.343 06:32:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.343 06:32:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.343 06:32:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.343 06:32:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.343 06:32:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.343 06:32:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.343 06:32:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.343 06:32:26 -- accel/accel.sh@42 -- # jq -r . 00:07:47.343 [2024-07-12 06:32:26.995264] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:47.343 [2024-07-12 06:32:26.995370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68987 ] 00:07:47.343 [2024-07-12 06:32:27.132666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.343 [2024-07-12 06:32:27.167201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.717 06:32:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:48.717 00:07:48.717 SPDK Configuration: 00:07:48.717 Core mask: 0x1 00:07:48.717 00:07:48.717 Accel Perf Configuration: 00:07:48.717 Workload Type: decompress 00:07:48.717 Transfer size: 4096 bytes 00:07:48.717 Vector count 1 00:07:48.717 Module: software 00:07:48.717 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.717 Queue depth: 32 00:07:48.717 Allocate depth: 32 00:07:48.717 # threads/core: 2 00:07:48.717 Run time: 1 seconds 00:07:48.717 Verify: Yes 00:07:48.717 00:07:48.717 Running for 1 seconds... 00:07:48.717 00:07:48.717 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.717 ------------------------------------------------------------------------------------ 00:07:48.717 0,1 32448/s 59 MiB/s 0 0 00:07:48.717 0,0 32352/s 59 MiB/s 0 0 00:07:48.717 ==================================================================================== 00:07:48.717 Total 64800/s 253 MiB/s 0 0' 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.717 06:32:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.717 06:32:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.717 06:32:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.717 06:32:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.717 06:32:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.717 06:32:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.717 06:32:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.717 06:32:28 -- accel/accel.sh@42 -- # jq -r . 00:07:48.717 [2024-07-12 06:32:28.324268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
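accel_decomp_mthread adds -T 2, requesting two worker threads per core, which is why the table above shows rows 0,0 and 0,1 on the single core with the load split roughly evenly between them:

  64800 transfers/s x 4096 B, i.e. about 253 MiB/s combined

The shape of the invocation, paths as in this log:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -y -T 2 \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib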
00:07:48.717 [2024-07-12 06:32:28.324371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69001 ] 00:07:48.717 [2024-07-12 06:32:28.466410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.717 [2024-07-12 06:32:28.506796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val=0x1 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val=decompress 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val=software 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- accel/accel.sh@21 -- # val=32 00:07:48.717 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.717 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.717 06:32:28 -- 
accel/accel.sh@21 -- # val=32 00:07:48.718 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.718 06:32:28 -- accel/accel.sh@21 -- # val=2 00:07:48.718 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.718 06:32:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.718 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.718 06:32:28 -- accel/accel.sh@21 -- # val=Yes 00:07:48.718 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.718 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.718 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:48.718 06:32:28 -- accel/accel.sh@21 -- # val= 00:07:48.718 06:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # IFS=: 00:07:48.718 06:32:28 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@21 -- # val= 00:07:50.135 06:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # IFS=: 00:07:50.135 06:32:29 -- accel/accel.sh@20 -- # read -r var val 00:07:50.135 06:32:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.135 06:32:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:50.135 ************************************ 00:07:50.135 END TEST accel_decomp_mthread 00:07:50.135 ************************************ 00:07:50.135 06:32:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.135 00:07:50.135 real 0m2.689s 00:07:50.135 user 0m2.329s 00:07:50.135 sys 0m0.152s 00:07:50.135 06:32:29 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:50.135 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:50.135 06:32:29 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.135 06:32:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:50.135 06:32:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.135 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:50.135 ************************************ 00:07:50.135 START TEST accel_deomp_full_mthread 00:07:50.135 ************************************ 00:07:50.135 06:32:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.135 06:32:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.135 06:32:29 -- accel/accel.sh@17 -- # local accel_module 00:07:50.135 06:32:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.135 06:32:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.135 06:32:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.135 06:32:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.135 06:32:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.135 06:32:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.135 06:32:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.135 06:32:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.135 06:32:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.135 06:32:29 -- accel/accel.sh@42 -- # jq -r . 00:07:50.135 [2024-07-12 06:32:29.737630] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:50.135 [2024-07-12 06:32:29.737702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69036 ] 00:07:50.135 [2024-07-12 06:32:29.873186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.135 [2024-07-12 06:32:29.909657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.512 06:32:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:51.512 00:07:51.512 SPDK Configuration: 00:07:51.512 Core mask: 0x1 00:07:51.512 00:07:51.512 Accel Perf Configuration: 00:07:51.512 Workload Type: decompress 00:07:51.512 Transfer size: 111250 bytes 00:07:51.512 Vector count 1 00:07:51.512 Module: software 00:07:51.512 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.512 Queue depth: 32 00:07:51.512 Allocate depth: 32 00:07:51.512 # threads/core: 2 00:07:51.512 Run time: 1 seconds 00:07:51.512 Verify: Yes 00:07:51.512 00:07:51.512 Running for 1 seconds... 
00:07:51.512 00:07:51.512 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.512 ------------------------------------------------------------------------------------ 00:07:51.512 0,1 2048/s 84 MiB/s 0 0 00:07:51.512 0,0 2016/s 83 MiB/s 0 0 00:07:51.512 ==================================================================================== 00:07:51.512 Total 4064/s 431 MiB/s 0 0' 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.512 06:32:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.512 06:32:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.512 06:32:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.512 06:32:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.512 06:32:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.512 06:32:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.512 06:32:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.512 06:32:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.512 06:32:31 -- accel/accel.sh@42 -- # jq -r . 00:07:51.512 [2024-07-12 06:32:31.111734] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:51.512 [2024-07-12 06:32:31.111834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69055 ] 00:07:51.512 [2024-07-12 06:32:31.252102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.512 [2024-07-12 06:32:31.293167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=0x1 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=decompress 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=software 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=32 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=32 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=2 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val=Yes 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:51.512 06:32:31 -- accel/accel.sh@21 -- # val= 00:07:51.512 06:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # IFS=: 00:07:51.512 06:32:31 -- accel/accel.sh@20 -- # read -r var val 00:07:52.888 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.888 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # read -r var val 00:07:52.888 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.888 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # read -r var val 00:07:52.888 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.888 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # 
read -r var val 00:07:52.888 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.888 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.888 06:32:32 -- accel/accel.sh@20 -- # read -r var val 00:07:52.888 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.889 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.889 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.889 06:32:32 -- accel/accel.sh@20 -- # read -r var val 00:07:52.889 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.889 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.889 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.889 06:32:32 -- accel/accel.sh@20 -- # read -r var val 00:07:52.889 06:32:32 -- accel/accel.sh@21 -- # val= 00:07:52.889 06:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.889 06:32:32 -- accel/accel.sh@20 -- # IFS=: 00:07:52.889 ************************************ 00:07:52.889 END TEST accel_deomp_full_mthread 00:07:52.889 ************************************ 00:07:52.889 06:32:32 -- accel/accel.sh@20 -- # read -r var val 00:07:52.889 06:32:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.889 06:32:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.889 06:32:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.889 00:07:52.889 real 0m2.759s 00:07:52.889 user 0m2.403s 00:07:52.889 sys 0m0.147s 00:07:52.889 06:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.889 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:52.889 06:32:32 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:52.889 06:32:32 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:52.889 06:32:32 -- accel/accel.sh@129 -- # build_accel_config 00:07:52.889 06:32:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:52.889 06:32:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.889 06:32:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.889 06:32:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.889 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:52.889 06:32:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.889 06:32:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.889 06:32:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.889 06:32:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.889 06:32:32 -- accel/accel.sh@42 -- # jq -r . 00:07:52.889 ************************************ 00:07:52.889 START TEST accel_dif_functional_tests 00:07:52.889 ************************************ 00:07:52.889 06:32:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:52.889 [2024-07-12 06:32:32.571466] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:52.889 [2024-07-12 06:32:32.571575] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69092 ] 00:07:52.889 [2024-07-12 06:32:32.708619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.889 [2024-07-12 06:32:32.744976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.889 [2024-07-12 06:32:32.745012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.889 [2024-07-12 06:32:32.745015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.889 00:07:52.889 00:07:52.889 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.889 http://cunit.sourceforge.net/ 00:07:52.889 00:07:52.889 00:07:52.889 Suite: accel_dif 00:07:52.889 Test: verify: DIF generated, GUARD check ...passed 00:07:52.889 Test: verify: DIF generated, APPTAG check ...passed 00:07:52.889 Test: verify: DIF generated, REFTAG check ...passed 00:07:52.889 Test: verify: DIF not generated, GUARD check ...[2024-07-12 06:32:32.793592] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:52.889 passed[2024-07-12 06:32:32.793855] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:52.889 00:07:52.889 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 06:32:32.794093] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:52.889 [2024-07-12 06:32:32.794299] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:52.889 passed 00:07:52.889 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 06:32:32.794488] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:52.889 [2024-07-12 06:32:32.794846] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:52.889 passed 00:07:52.889 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:52.889 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 06:32:32.795312] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:52.889 passed 00:07:52.889 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:52.889 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:52.889 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:52.889 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 06:32:32.796444] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:52.889 passed 00:07:52.889 Test: generate copy: DIF generated, GUARD check ...passed 00:07:52.889 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:52.889 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:52.889 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:52.889 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:52.889 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:52.889 Test: generate copy: iovecs-len validate ...[2024-07-12 06:32:32.797400] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:52.889 passed 00:07:52.889 Test: generate copy: buffer alignment validate ...passed 00:07:52.889 00:07:52.889 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.889 suites 1 1 n/a 0 0 00:07:52.889 tests 20 20 20 0 0 00:07:52.889 asserts 204 204 204 0 n/a 00:07:52.889 00:07:52.889 Elapsed time = 0.011 seconds 00:07:53.147 00:07:53.147 real 0m0.404s 00:07:53.147 user 0m0.476s 00:07:53.147 sys 0m0.101s 00:07:53.147 06:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.147 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.147 ************************************ 00:07:53.147 END TEST accel_dif_functional_tests 00:07:53.147 ************************************ 00:07:53.147 00:07:53.147 real 0m57.528s 00:07:53.147 user 1m2.416s 00:07:53.147 sys 0m4.465s 00:07:53.147 ************************************ 00:07:53.147 END TEST accel 00:07:53.147 ************************************ 00:07:53.147 06:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.147 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.147 06:32:32 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.147 06:32:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.147 06:32:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.147 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.147 ************************************ 00:07:53.147 START TEST accel_rpc 00:07:53.147 ************************************ 00:07:53.148 06:32:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.148 * Looking for test storage... 00:07:53.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:53.405 06:32:33 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:53.405 06:32:33 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69159 00:07:53.405 06:32:33 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:53.405 06:32:33 -- accel/accel_rpc.sh@15 -- # waitforlisten 69159 00:07:53.405 06:32:33 -- common/autotest_common.sh@819 -- # '[' -z 69159 ']' 00:07:53.405 06:32:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.405 06:32:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.405 06:32:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.405 06:32:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.405 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.405 [2024-07-12 06:32:33.122327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:53.405 [2024-07-12 06:32:33.122424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69159 ] 00:07:53.405 [2024-07-12 06:32:33.255331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.405 [2024-07-12 06:32:33.290856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.405 [2024-07-12 06:32:33.291058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.663 06:32:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:53.663 06:32:33 -- common/autotest_common.sh@852 -- # return 0 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:53.663 06:32:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.663 06:32:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.663 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 ************************************ 00:07:53.663 START TEST accel_assign_opcode 00:07:53.663 ************************************ 00:07:53.663 06:32:33 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:53.663 06:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.663 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 [2024-07-12 06:32:33.351460] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:53.663 06:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:53.663 06:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.663 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 [2024-07-12 06:32:33.359451] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:53.663 06:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:53.663 06:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.663 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 06:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:53.663 06:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.663 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 06:32:33 -- accel/accel_rpc.sh@42 -- # grep software 00:07:53.663 06:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.663 software 00:07:53.663 00:07:53.663 real 0m0.222s 00:07:53.663 user 0m0.071s 00:07:53.663 sys 0m0.013s 00:07:53.663 06:32:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.663 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 ************************************ 
00:07:53.663 END TEST accel_assign_opcode 00:07:53.663 ************************************ 00:07:53.921 06:32:33 -- accel/accel_rpc.sh@55 -- # killprocess 69159 00:07:53.921 06:32:33 -- common/autotest_common.sh@926 -- # '[' -z 69159 ']' 00:07:53.921 06:32:33 -- common/autotest_common.sh@930 -- # kill -0 69159 00:07:53.921 06:32:33 -- common/autotest_common.sh@931 -- # uname 00:07:53.921 06:32:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:53.921 06:32:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69159 00:07:53.921 06:32:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:53.921 06:32:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:53.921 killing process with pid 69159 00:07:53.921 06:32:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69159' 00:07:53.921 06:32:33 -- common/autotest_common.sh@945 -- # kill 69159 00:07:53.921 06:32:33 -- common/autotest_common.sh@950 -- # wait 69159 00:07:54.179 00:07:54.179 real 0m0.852s 00:07:54.179 user 0m0.859s 00:07:54.179 sys 0m0.278s 00:07:54.179 06:32:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.179 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.179 ************************************ 00:07:54.179 END TEST accel_rpc 00:07:54.179 ************************************ 00:07:54.179 06:32:33 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.179 06:32:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:54.179 06:32:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.179 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.179 ************************************ 00:07:54.179 START TEST app_cmdline 00:07:54.179 ************************************ 00:07:54.179 06:32:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.179 * Looking for test storage... 00:07:54.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.179 06:32:33 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:54.179 06:32:33 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69233 00:07:54.179 06:32:33 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:54.179 06:32:33 -- app/cmdline.sh@18 -- # waitforlisten 69233 00:07:54.179 06:32:33 -- common/autotest_common.sh@819 -- # '[' -z 69233 ']' 00:07:54.179 06:32:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.179 06:32:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:54.179 06:32:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.179 06:32:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:54.179 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.179 [2024-07-12 06:32:34.043906] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:54.179 [2024-07-12 06:32:34.044042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69233 ] 00:07:54.436 [2024-07-12 06:32:34.192324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.436 [2024-07-12 06:32:34.230173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.436 [2024-07-12 06:32:34.230330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.462 06:32:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.462 06:32:34 -- common/autotest_common.sh@852 -- # return 0 00:07:55.462 06:32:34 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:55.462 { 00:07:55.462 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:55.462 "fields": { 00:07:55.462 "major": 24, 00:07:55.462 "minor": 1, 00:07:55.462 "patch": 1, 00:07:55.462 "suffix": "-pre", 00:07:55.462 "commit": "4b94202c6" 00:07:55.462 } 00:07:55.462 } 00:07:55.462 06:32:35 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:55.462 06:32:35 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:55.462 06:32:35 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:55.462 06:32:35 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:55.462 06:32:35 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:55.462 06:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.462 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:07:55.462 06:32:35 -- app/cmdline.sh@26 -- # sort 00:07:55.462 06:32:35 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:55.462 06:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.462 06:32:35 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:55.462 06:32:35 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:55.462 06:32:35 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.462 06:32:35 -- common/autotest_common.sh@640 -- # local es=0 00:07:55.462 06:32:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.462 06:32:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.462 06:32:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:55.462 06:32:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.462 06:32:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:55.462 06:32:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.462 06:32:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:55.462 06:32:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.462 06:32:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:55.462 06:32:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.720 request: 00:07:55.720 { 00:07:55.720 "method": "env_dpdk_get_mem_stats", 00:07:55.720 "req_id": 1 00:07:55.720 } 00:07:55.720 Got 
JSON-RPC error response 00:07:55.720 response: 00:07:55.720 { 00:07:55.720 "code": -32601, 00:07:55.720 "message": "Method not found" 00:07:55.720 } 00:07:55.720 06:32:35 -- common/autotest_common.sh@643 -- # es=1 00:07:55.720 06:32:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:55.720 06:32:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:55.720 06:32:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:55.720 06:32:35 -- app/cmdline.sh@1 -- # killprocess 69233 00:07:55.720 06:32:35 -- common/autotest_common.sh@926 -- # '[' -z 69233 ']' 00:07:55.720 06:32:35 -- common/autotest_common.sh@930 -- # kill -0 69233 00:07:55.720 06:32:35 -- common/autotest_common.sh@931 -- # uname 00:07:55.720 06:32:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:55.720 06:32:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69233 00:07:55.720 06:32:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:55.720 06:32:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:55.720 killing process with pid 69233 00:07:55.720 06:32:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69233' 00:07:55.720 06:32:35 -- common/autotest_common.sh@945 -- # kill 69233 00:07:55.720 06:32:35 -- common/autotest_common.sh@950 -- # wait 69233 00:07:55.978 00:07:55.978 real 0m1.944s 00:07:55.978 user 0m2.595s 00:07:55.978 sys 0m0.354s 00:07:55.978 06:32:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.978 ************************************ 00:07:55.978 END TEST app_cmdline 00:07:55.978 ************************************ 00:07:55.978 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:07:55.978 06:32:35 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.978 06:32:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.978 06:32:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.978 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:07:55.978 ************************************ 00:07:55.978 START TEST version 00:07:55.978 ************************************ 00:07:55.978 06:32:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:56.236 * Looking for test storage... 
00:07:56.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.236 06:32:35 -- app/version.sh@17 -- # get_header_version major 00:07:56.236 06:32:35 -- app/version.sh@14 -- # cut -f2 00:07:56.236 06:32:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.236 06:32:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.236 06:32:35 -- app/version.sh@17 -- # major=24 00:07:56.236 06:32:35 -- app/version.sh@18 -- # get_header_version minor 00:07:56.236 06:32:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.236 06:32:35 -- app/version.sh@14 -- # cut -f2 00:07:56.236 06:32:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.236 06:32:35 -- app/version.sh@18 -- # minor=1 00:07:56.237 06:32:35 -- app/version.sh@19 -- # get_header_version patch 00:07:56.237 06:32:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.237 06:32:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.237 06:32:35 -- app/version.sh@14 -- # cut -f2 00:07:56.237 06:32:35 -- app/version.sh@19 -- # patch=1 00:07:56.237 06:32:35 -- app/version.sh@20 -- # get_header_version suffix 00:07:56.237 06:32:35 -- app/version.sh@14 -- # cut -f2 00:07:56.237 06:32:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.237 06:32:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.237 06:32:35 -- app/version.sh@20 -- # suffix=-pre 00:07:56.237 06:32:35 -- app/version.sh@22 -- # version=24.1 00:07:56.237 06:32:35 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:56.237 06:32:35 -- app/version.sh@25 -- # version=24.1.1 00:07:56.237 06:32:35 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:56.237 06:32:35 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:56.237 06:32:35 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:56.237 06:32:36 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:56.237 06:32:36 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:56.237 00:07:56.237 real 0m0.133s 00:07:56.237 user 0m0.075s 00:07:56.237 sys 0m0.084s 00:07:56.237 06:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.237 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 ************************************ 00:07:56.237 END TEST version 00:07:56.237 ************************************ 00:07:56.237 06:32:36 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:56.237 06:32:36 -- spdk/autotest.sh@204 -- # uname -s 00:07:56.237 06:32:36 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:56.237 06:32:36 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:56.237 06:32:36 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:07:56.237 06:32:36 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:07:56.237 06:32:36 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:56.237 06:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.237 06:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.237 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 
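The version test above derives 24.1.1rc0 by scraping the SPDK_VERSION_* macros out of include/spdk/version.h and then comparing the result against the Python package's spdk.__version__. A minimal standalone sketch of that parsing, assuming the version.h layout implied by the commands in the log (macro name and value separated by a tab, which is what makes cut -f2 work) and inferring the -pre -> rc0 mapping from the py_version=24.1.1rc0 comparison rather than from version.sh itself:

  #!/usr/bin/env bash
  # Sketch of the get_header_version parsing exercised by test/app/version.sh.
  # Run from an SPDK checkout; the header path and the -pre -> rc0 step are
  # assumptions based on the log output above, not an authoritative copy.
  hdr=include/spdk/version.h

  get_header_version() {
      # '#define SPDK_VERSION_MAJOR<TAB>24' -> '24'; tr strips the quotes
      # around string-valued macros such as SPDK_VERSION_SUFFIX.
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }

  major=$(get_header_version MAJOR)    # 24 in this run
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 1
  suffix=$(get_header_version SUFFIX)  # -pre

  version=$major.$minor
  ((patch != 0)) && version=$version.$patch
  # The log checks the result against 'python3 -c "import spdk; print(spdk.__version__)"',
  # which reports 24.1.1rc0, i.e. '-pre' maps to PEP 440's 'rc0'.
  [[ $suffix == -pre ]] && version=${version}rc0
  echo "$version"  # 24.1.1rc0

The tab assumption matters: cut's default delimiter is a tab, so a header that used spaces between the macro name and value would make cut -f2 return the whole line instead of the value.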
************************************ 00:07:56.237 START TEST spdk_dd 00:07:56.237 ************************************ 00:07:56.237 06:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:56.237 * Looking for test storage... 00:07:56.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.237 06:32:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.237 06:32:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.237 06:32:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.237 06:32:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.237 06:32:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.237 06:32:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.237 06:32:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.237 06:32:36 -- paths/export.sh@5 -- # export PATH 00:07:56.237 06:32:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.237 06:32:36 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:56.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.754 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:56.754 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:56.754 06:32:36 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:56.754 06:32:36 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:56.754 06:32:36 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:56.754 06:32:36 -- scripts/common.sh@312 -- # local nvmes 00:07:56.754 06:32:36 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:56.754 06:32:36 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:56.754 06:32:36 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:56.754 06:32:36 -- scripts/common.sh@297 -- # local bdf= 00:07:56.754 06:32:36 -- scripts/common.sh@299 -- # 
iter_all_pci_class_code 01 08 02 00:07:56.754 06:32:36 -- scripts/common.sh@232 -- # local class 00:07:56.754 06:32:36 -- scripts/common.sh@233 -- # local subclass 00:07:56.754 06:32:36 -- scripts/common.sh@234 -- # local progif 00:07:56.754 06:32:36 -- scripts/common.sh@235 -- # printf %02x 1 00:07:56.754 06:32:36 -- scripts/common.sh@235 -- # class=01 00:07:56.754 06:32:36 -- scripts/common.sh@236 -- # printf %02x 8 00:07:56.754 06:32:36 -- scripts/common.sh@236 -- # subclass=08 00:07:56.754 06:32:36 -- scripts/common.sh@237 -- # printf %02x 2 00:07:56.754 06:32:36 -- scripts/common.sh@237 -- # progif=02 00:07:56.754 06:32:36 -- scripts/common.sh@239 -- # hash lspci 00:07:56.754 06:32:36 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:56.755 06:32:36 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:56.755 06:32:36 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:56.755 06:32:36 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:56.755 06:32:36 -- scripts/common.sh@244 -- # tr -d '"' 00:07:56.755 06:32:36 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:56.755 06:32:36 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:07:56.755 06:32:36 -- scripts/common.sh@15 -- # local i 00:07:56.755 06:32:36 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:56.755 06:32:36 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:56.755 06:32:36 -- scripts/common.sh@24 -- # return 0 00:07:56.755 06:32:36 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:56.755 06:32:36 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:56.755 06:32:36 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:56.755 06:32:36 -- scripts/common.sh@15 -- # local i 00:07:56.755 06:32:36 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:56.755 06:32:36 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:56.755 06:32:36 -- scripts/common.sh@24 -- # return 0 00:07:56.755 06:32:36 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:56.755 06:32:36 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:56.755 06:32:36 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:56.755 06:32:36 -- scripts/common.sh@322 -- # uname -s 00:07:56.755 06:32:36 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:56.755 06:32:36 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:56.755 06:32:36 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:56.755 06:32:36 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:56.755 06:32:36 -- scripts/common.sh@322 -- # uname -s 00:07:56.755 06:32:36 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:56.755 06:32:36 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:56.755 06:32:36 -- scripts/common.sh@327 -- # (( 2 )) 00:07:56.755 06:32:36 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:56.755 06:32:36 -- dd/dd.sh@13 -- # check_liburing 00:07:56.755 06:32:36 -- dd/common.sh@139 -- # local lib so 00:07:56.755 06:32:36 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:56.755 06:32:36 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- 
dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:56.755 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.755 06:32:36 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 
-- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:56.756 06:32:36 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:56.756 06:32:36 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:56.756 * spdk_dd linked to liburing 00:07:56.756 06:32:36 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:56.756 06:32:36 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:56.756 06:32:36 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:56.756 06:32:36 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:56.756 06:32:36 -- 
common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:56.756 06:32:36 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:56.756 06:32:36 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:56.756 06:32:36 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:56.756 06:32:36 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:56.756 06:32:36 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:56.756 06:32:36 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:56.756 06:32:36 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:56.756 06:32:36 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:56.756 06:32:36 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:56.756 06:32:36 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:56.756 06:32:36 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:56.756 06:32:36 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:56.756 06:32:36 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:56.756 06:32:36 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:56.756 06:32:36 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:56.756 06:32:36 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:56.756 06:32:36 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:56.756 06:32:36 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:56.756 06:32:36 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:56.756 06:32:36 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:56.756 06:32:36 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:56.756 06:32:36 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:56.756 06:32:36 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:56.756 06:32:36 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:56.756 06:32:36 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:56.756 06:32:36 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:56.756 06:32:36 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:56.756 06:32:36 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:56.756 06:32:36 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:56.756 06:32:36 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:56.756 06:32:36 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:56.756 06:32:36 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:56.756 06:32:36 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:56.756 06:32:36 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:56.756 06:32:36 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:56.756 06:32:36 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:56.756 06:32:36 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:56.756 06:32:36 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:56.756 06:32:36 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:56.756 06:32:36 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:56.756 06:32:36 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:56.756 06:32:36 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:56.756 06:32:36 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:56.756 06:32:36 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:56.756 06:32:36 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 
00:07:56.756 06:32:36 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:56.756 06:32:36 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:56.756 06:32:36 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:56.756 06:32:36 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:56.756 06:32:36 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:56.756 06:32:36 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:56.756 06:32:36 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:56.756 06:32:36 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:56.756 06:32:36 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:56.756 06:32:36 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:56.756 06:32:36 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:56.756 06:32:36 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:56.756 06:32:36 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:56.756 06:32:36 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:56.756 06:32:36 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:56.756 06:32:36 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:56.756 06:32:36 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:56.756 06:32:36 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:56.756 06:32:36 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:56.756 06:32:36 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:56.756 06:32:36 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:56.756 06:32:36 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:56.756 06:32:36 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:56.756 06:32:36 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:56.756 06:32:36 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:56.756 06:32:36 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:56.756 06:32:36 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:56.756 06:32:36 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:56.756 06:32:36 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:56.756 06:32:36 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:56.756 06:32:36 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:56.756 06:32:36 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:56.756 06:32:36 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:56.756 06:32:36 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:56.756 06:32:36 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:56.756 06:32:36 -- dd/common.sh@157 -- # return 0 00:07:56.756 06:32:36 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:56.756 06:32:36 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:56.756 06:32:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:56.756 06:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.756 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.756 ************************************ 00:07:56.756 START TEST spdk_dd_basic_rw 00:07:56.756 ************************************ 00:07:56.756 06:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:56.756 * Looking for test storage... 
00:07:56.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:57.017 06:32:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.017 06:32:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.017 06:32:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.017 06:32:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.017 06:32:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.017 06:32:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.017 06:32:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.017 06:32:36 -- paths/export.sh@5 -- # export PATH 00:07:57.017 06:32:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.017 06:32:36 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:57.017 06:32:36 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:57.017 06:32:36 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:57.017 06:32:36 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:57.017 06:32:36 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:57.017 06:32:36 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:57.017 06:32:36 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:57.017 06:32:36 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.017 06:32:36 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.017 06:32:36 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:57.017 06:32:36 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:57.017 06:32:36 -- dd/common.sh@126 -- # mapfile -t id 00:07:57.017 06:32:36 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:57.018 06:32:36 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 104 Data Units Written: 7 Host Read Commands: 2225 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power 
On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:57.018 06:32:36 -- dd/common.sh@130 -- # lbaf=04 00:07:57.018 06:32:36 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported 
UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported 
Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 104 Data Units Written: 7 Host Read Commands: 2225 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:57.018 06:32:36 -- dd/common.sh@132 -- # lbaf=4096 00:07:57.018 06:32:36 -- dd/common.sh@134 -- # echo 4096 00:07:57.018 06:32:36 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:57.018 06:32:36 -- dd/basic_rw.sh@96 -- # : 00:07:57.018 06:32:36 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.018 06:32:36 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:57.018 06:32:36 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:57.018 06:32:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.018 06:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.018 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:07:57.018 06:32:36 -- common/autotest_common.sh@10 -- # set +x 
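The two giant [[ ... =~ ... ]] evaluations above are get_native_nvme_bs (dd/common.sh@124-134) matching the spdk_nvme_identify dump twice: first to find the current LBA format (#04), then to read that format's data size, which basic_rw.sh@93 stores as native_bs=4096. A sketch with the identify invocation and both regexes copied from the trace; the surrounding glue is assumed:

get_native_nvme_bs() {
    local pci=$1 lbaf id re
    # dd/common.sh@126: capture the controller dump shown above, one line per element
    mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:pcie traddr:$pci")
    # dd/common.sh@129-130: which LBA format is current? (-> 04)
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}
    # dd/common.sh@131-134: that format's data size is the native block size (-> 4096)
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re ]] && echo "${BASH_REMATCH[1]}"
}

The echoed 4096 is why the next test deliberately feeds spdk_dd --bs=2048 and expects it to be rejected.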
00:07:57.018 ************************************ 00:07:57.018 START TEST dd_bs_lt_native_bs 00:07:57.018 ************************************ 00:07:57.018 06:32:36 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.018 06:32:36 -- common/autotest_common.sh@640 -- # local es=0 00:07:57.018 06:32:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.018 06:32:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.018 06:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:57.018 06:32:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.018 06:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:57.018 06:32:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.018 06:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:57.018 06:32:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.018 06:32:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.018 06:32:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.018 { 00:07:57.018 "subsystems": [ 00:07:57.018 { 00:07:57.018 "subsystem": "bdev", 00:07:57.018 "config": [ 00:07:57.018 { 00:07:57.018 "params": { 00:07:57.018 "trtype": "pcie", 00:07:57.018 "traddr": "0000:00:06.0", 00:07:57.018 "name": "Nvme0" 00:07:57.018 }, 00:07:57.018 "method": "bdev_nvme_attach_controller" 00:07:57.018 }, 00:07:57.018 { 00:07:57.018 "method": "bdev_wait_for_examine" 00:07:57.018 } 00:07:57.018 ] 00:07:57.018 } 00:07:57.018 ] 00:07:57.018 } 00:07:57.018 [2024-07-12 06:32:36.928147] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
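The autotest_common.sh@628-643 trace above is the NOT wrapper vetting its argument before running it, and the es= lines just below show the failure status being normalised. A sketch inferred from those traced lines; everything outside them is an assumption:

valid_exec_arg() {
    local arg=$1
    # @632/@634: functions and builtins pass as-is; files must resolve to an executable
    case "$(type -t "$arg")" in
        function|builtin) return 0 ;;
        file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;
        *) return 1 ;;
    esac
}

NOT() {
    local es=0
    valid_exec_arg "$@" && "$@" || es=$?   # spdk_dd exits 234 here: --bs=2048 < native 4096
    (( es > 128 )) && es=$((es - 128))     # @651-652: strip the signal encoding, 234 -> 106
    (( es != 0 )) && es=1                  # @653-660: the traced case "$es" collapses to 1
    (( !es == 0 ))                         # @667: succeed exactly when the command failed
}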
00:07:57.018 [2024-07-12 06:32:36.928243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69553 ]
00:07:57.277 [2024-07-12 06:32:37.069835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.277 [2024-07-12 06:32:37.105621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.536 [2024-07-12 06:32:37.213037] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:07:57.536 [2024-07-12 06:32:37.213115] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:57.536 [2024-07-12 06:32:37.283139] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:07:57.536 06:32:37 -- common/autotest_common.sh@643 -- # es=234
00:07:57.536 06:32:37 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:07:57.536 06:32:37 -- common/autotest_common.sh@652 -- # es=106
00:07:57.536 06:32:37 -- common/autotest_common.sh@653 -- # case "$es" in
00:07:57.536 06:32:37 -- common/autotest_common.sh@660 -- # es=1
00:07:57.536 06:32:37 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:07:57.536
00:07:57.536 real 0m0.473s
00:07:57.536 user 0m0.311s
00:07:57.536 sys 0m0.116s
00:07:57.536 ************************************
00:07:57.536 END TEST dd_bs_lt_native_bs
00:07:57.536 ************************************
00:07:57.536 06:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:57.536 06:32:37 -- common/autotest_common.sh@10 -- # set +x
00:07:57.536 06:32:37 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:07:57.536 06:32:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:07:57.536 06:32:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:57.536 06:32:37 -- common/autotest_common.sh@10 -- # set +x
00:07:57.536 ************************************
00:07:57.536 START TEST dd_rw
00:07:57.536 ************************************
00:07:57.536 06:32:37 -- common/autotest_common.sh@1104 -- # basic_rw 4096
00:07:57.536 06:32:37 -- dd/basic_rw.sh@11 -- # local native_bs=4096
00:07:57.536 06:32:37 -- dd/basic_rw.sh@12 -- # local count size
00:07:57.536 06:32:37 -- dd/basic_rw.sh@13 -- # local qds bss
00:07:57.536 06:32:37 -- dd/basic_rw.sh@15 -- # qds=(1 64)
00:07:57.536 06:32:37 -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:07:57.536 06:32:37 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:07:57.536 06:32:37 -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:07:57.536 06:32:37 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:07:57.536 06:32:37 -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:07:57.536 06:32:37 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:07:57.536 06:32:37 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:07:57.536 06:32:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:07:57.536 06:32:37 -- dd/basic_rw.sh@23 -- # count=15
00:07:57.536 06:32:37 -- dd/basic_rw.sh@24 -- # count=15
00:07:57.536 06:32:37 -- dd/basic_rw.sh@25 -- # size=61440
00:07:57.536 06:32:37 -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:07:57.536 06:32:37 -- dd/common.sh@98 -- # xtrace_disable
00:07:57.536 06:32:37 -- common/autotest_common.sh@10 -- # set +x
00:07:58.104 06:32:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
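The END/START banner pairs and the real/user/sys block above come from run_test in autotest_common.sh (traced at @1077-@1105). A sketch of the wrapper's shape, assumed from the traced lines and the banner text it prints:

run_test() {
    local name=$1; shift
    [ "$#" -le 1 ] && :  # @1077: argument-count guard; its exact handling is not traced
    xtrace_disable        # @1083
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"             # @1104 runs the test body and emits the real/user/sys block
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    xtrace_disable        # @1105
}
# e.g. run_test dd_rw basic_rw 4096, as launched at dd/basic_rw.sh@103 above.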
00:07:58.363 06:32:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:58.363 06:32:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.363 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:58.363 [2024-07-12 06:32:38.069712] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:58.363 [2024-07-12 06:32:38.069808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:07:58.363 { 00:07:58.363 "subsystems": [ 00:07:58.363 { 00:07:58.363 "subsystem": "bdev", 00:07:58.363 "config": [ 00:07:58.363 { 00:07:58.363 "params": { 00:07:58.363 "trtype": "pcie", 00:07:58.363 "traddr": "0000:00:06.0", 00:07:58.363 "name": "Nvme0" 00:07:58.363 }, 00:07:58.363 "method": "bdev_nvme_attach_controller" 00:07:58.363 }, 00:07:58.363 { 00:07:58.363 "method": "bdev_wait_for_examine" 00:07:58.363 } 00:07:58.363 ] 00:07:58.363 } 00:07:58.363 ] 00:07:58.363 } 00:07:58.363 [2024-07-12 06:32:38.211690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.363 [2024-07-12 06:32:38.245611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.621  Copying: 60/60 [kB] (average 29 MBps) 00:07:58.621 00:07:58.621 06:32:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:58.621 06:32:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:58.621 06:32:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.621 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:58.878 [2024-07-12 06:32:38.569995] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
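Every spdk_dd invocation in this test reads the same bdev configuration; the JSON echoed above is what gen_conf emits from the method_bdev_nvme_attach_controller_0 array declared at basic_rw.sh@85. A static sketch of that output (the real gen_conf is generic over method_* arrays, which this stand-in is not):

gen_conf() {
    # Emits the exact config logged above: attach Nvme0 at 0000:00:06.0 over PCIe,
    # then wait for bdev examination before dd starts.
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}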
00:07:58.878 [2024-07-12 06:32:38.570076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69602 ] 00:07:58.878 { 00:07:58.878 "subsystems": [ 00:07:58.878 { 00:07:58.878 "subsystem": "bdev", 00:07:58.878 "config": [ 00:07:58.878 { 00:07:58.878 "params": { 00:07:58.878 "trtype": "pcie", 00:07:58.878 "traddr": "0000:00:06.0", 00:07:58.878 "name": "Nvme0" 00:07:58.878 }, 00:07:58.878 "method": "bdev_nvme_attach_controller" 00:07:58.878 }, 00:07:58.878 { 00:07:58.878 "method": "bdev_wait_for_examine" 00:07:58.878 } 00:07:58.878 ] 00:07:58.878 } 00:07:58.878 ] 00:07:58.878 } 00:07:58.879 [2024-07-12 06:32:38.707072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.879 [2024-07-12 06:32:38.750343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.136  Copying: 60/60 [kB] (average 19 MBps) 00:07:59.136 00:07:59.393 06:32:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.393 06:32:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:59.393 06:32:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:59.393 06:32:39 -- dd/common.sh@11 -- # local nvme_ref= 00:07:59.393 06:32:39 -- dd/common.sh@12 -- # local size=61440 00:07:59.393 06:32:39 -- dd/common.sh@14 -- # local bs=1048576 00:07:59.393 06:32:39 -- dd/common.sh@15 -- # local count=1 00:07:59.393 06:32:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:59.393 06:32:39 -- dd/common.sh@18 -- # gen_conf 00:07:59.393 06:32:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.393 06:32:39 -- common/autotest_common.sh@10 -- # set +x 00:07:59.393 [2024-07-12 06:32:39.120646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
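clear_nvme, traced at dd/common.sh@10-18 just above, zero-fills the bdev between cases so stale data can never satisfy the next diff. A sketch of the traced shape (how count is derived from size is assumed; the log only shows count=1, which covers the 61440-byte region with one 1 MiB write):

clear_nvme() {
    local bdev=$1 nvme_ref=$2 size=$3
    local bs=1048576                       # @14: wipe in 1 MiB writes
    local count=1                          # @15: traced value, presumably ceil(size / bs)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero \
        --bs=$bs --ob="$bdev" --count=$count --json <(gen_conf)
}
# Each "Copying: 1024/1024 [kB]" line in this section is one of these wipes completing.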
00:07:59.394 [2024-07-12 06:32:39.120772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69610 ] 00:07:59.394 { 00:07:59.394 "subsystems": [ 00:07:59.394 { 00:07:59.394 "subsystem": "bdev", 00:07:59.394 "config": [ 00:07:59.394 { 00:07:59.394 "params": { 00:07:59.394 "trtype": "pcie", 00:07:59.394 "traddr": "0000:00:06.0", 00:07:59.394 "name": "Nvme0" 00:07:59.394 }, 00:07:59.394 "method": "bdev_nvme_attach_controller" 00:07:59.394 }, 00:07:59.394 { 00:07:59.394 "method": "bdev_wait_for_examine" 00:07:59.394 } 00:07:59.394 ] 00:07:59.394 } 00:07:59.394 ] 00:07:59.394 } 00:07:59.394 [2024-07-12 06:32:39.261163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.394 [2024-07-12 06:32:39.302541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.909  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:59.909 00:07:59.909 06:32:39 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:59.909 06:32:39 -- dd/basic_rw.sh@23 -- # count=15 00:07:59.909 06:32:39 -- dd/basic_rw.sh@24 -- # count=15 00:07:59.909 06:32:39 -- dd/basic_rw.sh@25 -- # size=61440 00:07:59.909 06:32:39 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:59.909 06:32:39 -- dd/common.sh@98 -- # xtrace_disable 00:07:59.909 06:32:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.513 06:32:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:00.513 06:32:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:00.513 06:32:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.513 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:08:00.513 [2024-07-12 06:32:40.275922] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
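That completes the first full cycle for bs=4096 qd=1: generate 61440 bytes, write them through the bdev, read them back, compare, wipe. Condensed, the pass that basic_rw.sh repeats for every (bs, qd) pair looks like the following (gen_bytes' redirection into the dump file is assumed; its internals run under xtrace_disable):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
gen_bytes 61440 > "$test_file0"       # 15 x 4096-byte blocks of random payload (assumed form)
"$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
"$SPDK_DD" --ib=Nvme0n1 --of="$test_file1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)
diff -q "$test_file0" "$test_file1"   # the round trip must be bit-identical
clear_nvme Nvme0n1 '' 61440           # wipe the namespace before the next case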
00:08:00.513 [2024-07-12 06:32:40.276029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69628 ] 00:08:00.513 { 00:08:00.513 "subsystems": [ 00:08:00.513 { 00:08:00.513 "subsystem": "bdev", 00:08:00.513 "config": [ 00:08:00.513 { 00:08:00.513 "params": { 00:08:00.513 "trtype": "pcie", 00:08:00.513 "traddr": "0000:00:06.0", 00:08:00.513 "name": "Nvme0" 00:08:00.513 }, 00:08:00.513 "method": "bdev_nvme_attach_controller" 00:08:00.513 }, 00:08:00.513 { 00:08:00.513 "method": "bdev_wait_for_examine" 00:08:00.513 } 00:08:00.513 ] 00:08:00.513 } 00:08:00.513 ] 00:08:00.513 } 00:08:00.513 [2024-07-12 06:32:40.407941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.772 [2024-07-12 06:32:40.451103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.029  Copying: 60/60 [kB] (average 58 MBps) 00:08:01.029 00:08:01.029 06:32:40 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:01.029 06:32:40 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:01.029 06:32:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.029 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:08:01.029 { 00:08:01.029 "subsystems": [ 00:08:01.029 { 00:08:01.029 "subsystem": "bdev", 00:08:01.029 "config": [ 00:08:01.029 { 00:08:01.029 "params": { 00:08:01.029 "trtype": "pcie", 00:08:01.029 "traddr": "0000:00:06.0", 00:08:01.029 "name": "Nvme0" 00:08:01.029 }, 00:08:01.029 "method": "bdev_nvme_attach_controller" 00:08:01.029 }, 00:08:01.029 { 00:08:01.029 "method": "bdev_wait_for_examine" 00:08:01.029 } 00:08:01.029 ] 00:08:01.029 } 00:08:01.029 ] 00:08:01.029 } 00:08:01.029 [2024-07-12 06:32:40.810675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:01.029 [2024-07-12 06:32:40.810780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69646 ] 00:08:01.029 [2024-07-12 06:32:40.944067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.287 [2024-07-12 06:32:40.978172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.546  Copying: 60/60 [kB] (average 58 MBps) 00:08:01.546 00:08:01.546 06:32:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.546 06:32:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:01.546 06:32:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:01.546 06:32:41 -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.546 06:32:41 -- dd/common.sh@12 -- # local size=61440 00:08:01.546 06:32:41 -- dd/common.sh@14 -- # local bs=1048576 00:08:01.546 06:32:41 -- dd/common.sh@15 -- # local count=1 00:08:01.546 06:32:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:01.546 06:32:41 -- dd/common.sh@18 -- # gen_conf 00:08:01.546 06:32:41 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.546 06:32:41 -- common/autotest_common.sh@10 -- # set +x 00:08:01.546 [2024-07-12 06:32:41.311850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:01.546 [2024-07-12 06:32:41.311993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69654 ] 00:08:01.546 { 00:08:01.546 "subsystems": [ 00:08:01.546 { 00:08:01.546 "subsystem": "bdev", 00:08:01.546 "config": [ 00:08:01.546 { 00:08:01.546 "params": { 00:08:01.546 "trtype": "pcie", 00:08:01.546 "traddr": "0000:00:06.0", 00:08:01.546 "name": "Nvme0" 00:08:01.546 }, 00:08:01.546 "method": "bdev_nvme_attach_controller" 00:08:01.546 }, 00:08:01.546 { 00:08:01.546 "method": "bdev_wait_for_examine" 00:08:01.546 } 00:08:01.546 ] 00:08:01.546 } 00:08:01.546 ] 00:08:01.546 } 00:08:01.546 [2024-07-12 06:32:41.455077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.804 [2024-07-12 06:32:41.490660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.063  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:02.063 00:08:02.063 06:32:41 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:02.063 06:32:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:02.063 06:32:41 -- dd/basic_rw.sh@23 -- # count=7 00:08:02.063 06:32:41 -- dd/basic_rw.sh@24 -- # count=7 00:08:02.063 06:32:41 -- dd/basic_rw.sh@25 -- # size=57344 00:08:02.063 06:32:41 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:02.063 06:32:41 -- dd/common.sh@98 -- # xtrace_disable 00:08:02.063 06:32:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 06:32:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:02.631 06:32:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:02.631 06:32:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.631 06:32:42 -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 [2024-07-12 06:32:42.373769] Starting SPDK 
v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:02.631 [2024-07-12 06:32:42.374371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69672 ] 00:08:02.631 { 00:08:02.631 "subsystems": [ 00:08:02.631 { 00:08:02.631 "subsystem": "bdev", 00:08:02.631 "config": [ 00:08:02.631 { 00:08:02.631 "params": { 00:08:02.631 "trtype": "pcie", 00:08:02.631 "traddr": "0000:00:06.0", 00:08:02.631 "name": "Nvme0" 00:08:02.631 }, 00:08:02.631 "method": "bdev_nvme_attach_controller" 00:08:02.631 }, 00:08:02.631 { 00:08:02.631 "method": "bdev_wait_for_examine" 00:08:02.631 } 00:08:02.631 ] 00:08:02.631 } 00:08:02.631 ] 00:08:02.631 } 00:08:02.631 [2024-07-12 06:32:42.515769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.889 [2024-07-12 06:32:42.553792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.150  Copying: 56/56 [kB] (average 54 MBps) 00:08:03.150 00:08:03.150 06:32:42 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:03.150 06:32:42 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:03.150 06:32:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.150 06:32:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.150 [2024-07-12 06:32:42.867858] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:03.150 [2024-07-12 06:32:42.867951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69690 ] 00:08:03.150 { 00:08:03.150 "subsystems": [ 00:08:03.150 { 00:08:03.150 "subsystem": "bdev", 00:08:03.150 "config": [ 00:08:03.150 { 00:08:03.150 "params": { 00:08:03.150 "trtype": "pcie", 00:08:03.150 "traddr": "0000:00:06.0", 00:08:03.150 "name": "Nvme0" 00:08:03.150 }, 00:08:03.150 "method": "bdev_nvme_attach_controller" 00:08:03.150 }, 00:08:03.150 { 00:08:03.150 "method": "bdev_wait_for_examine" 00:08:03.150 } 00:08:03.150 ] 00:08:03.150 } 00:08:03.150 ] 00:08:03.150 } 00:08:03.150 [2024-07-12 06:32:42.996855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.150 [2024-07-12 06:32:43.032481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.408  Copying: 56/56 [kB] (average 54 MBps) 00:08:03.408 00:08:03.408 06:32:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.408 06:32:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:03.408 06:32:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.408 06:32:43 -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.408 06:32:43 -- dd/common.sh@12 -- # local size=57344 00:08:03.408 06:32:43 -- dd/common.sh@14 -- # local bs=1048576 00:08:03.408 06:32:43 -- dd/common.sh@15 -- # local count=1 00:08:03.408 06:32:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.408 06:32:43 -- dd/common.sh@18 -- # gen_conf 00:08:03.408 06:32:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.408 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.667 
[2024-07-12 06:32:43.360905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:03.667 [2024-07-12 06:32:43.361008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69698 ] 00:08:03.667 { 00:08:03.667 "subsystems": [ 00:08:03.667 { 00:08:03.667 "subsystem": "bdev", 00:08:03.667 "config": [ 00:08:03.667 { 00:08:03.667 "params": { 00:08:03.667 "trtype": "pcie", 00:08:03.667 "traddr": "0000:00:06.0", 00:08:03.667 "name": "Nvme0" 00:08:03.667 }, 00:08:03.667 "method": "bdev_nvme_attach_controller" 00:08:03.667 }, 00:08:03.667 { 00:08:03.667 "method": "bdev_wait_for_examine" 00:08:03.667 } 00:08:03.667 ] 00:08:03.667 } 00:08:03.667 ] 00:08:03.667 } 00:08:03.667 [2024-07-12 06:32:43.494599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.667 [2024-07-12 06:32:43.528741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.925  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:03.925 00:08:03.925 06:32:43 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:03.925 06:32:43 -- dd/basic_rw.sh@23 -- # count=7 00:08:03.925 06:32:43 -- dd/basic_rw.sh@24 -- # count=7 00:08:03.925 06:32:43 -- dd/basic_rw.sh@25 -- # size=57344 00:08:03.925 06:32:43 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:03.925 06:32:43 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.925 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.492 06:32:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:04.492 06:32:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.492 06:32:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.492 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 [2024-07-12 06:32:44.421762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
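By this point the sweep has advanced to its second block size (8192) at qd=64, with count=7 and size=57344. The loop driving all six runs, reconstructed from the basic_rw.sh@11-25 traces (do_one_pass is a hypothetical stand-in for the write/read/diff/wipe pass sketched earlier):

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))       # 4096 8192 16384
done
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        # traced counts: 15 @ 4096, 7 @ 8192, 3 @ 16384; size = count * bs
        do_one_pass "$bs" "$qd"       # hypothetical helper
    done
done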
00:08:04.829 [2024-07-12 06:32:44.422081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69716 ] 00:08:04.829 { 00:08:04.829 "subsystems": [ 00:08:04.829 { 00:08:04.829 "subsystem": "bdev", 00:08:04.829 "config": [ 00:08:04.829 { 00:08:04.829 "params": { 00:08:04.829 "trtype": "pcie", 00:08:04.829 "traddr": "0000:00:06.0", 00:08:04.829 "name": "Nvme0" 00:08:04.829 }, 00:08:04.829 "method": "bdev_nvme_attach_controller" 00:08:04.829 }, 00:08:04.829 { 00:08:04.829 "method": "bdev_wait_for_examine" 00:08:04.829 } 00:08:04.829 ] 00:08:04.829 } 00:08:04.829 ] 00:08:04.829 } 00:08:04.829 [2024-07-12 06:32:44.562212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.829 [2024-07-12 06:32:44.596082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.090  Copying: 56/56 [kB] (average 54 MBps) 00:08:05.090 00:08:05.090 06:32:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.090 06:32:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:05.090 06:32:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.090 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.090 [2024-07-12 06:32:44.902143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:05.090 [2024-07-12 06:32:44.902239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69734 ] 00:08:05.090 { 00:08:05.090 "subsystems": [ 00:08:05.090 { 00:08:05.090 "subsystem": "bdev", 00:08:05.090 "config": [ 00:08:05.090 { 00:08:05.090 "params": { 00:08:05.090 "trtype": "pcie", 00:08:05.090 "traddr": "0000:00:06.0", 00:08:05.090 "name": "Nvme0" 00:08:05.090 }, 00:08:05.090 "method": "bdev_nvme_attach_controller" 00:08:05.090 }, 00:08:05.090 { 00:08:05.090 "method": "bdev_wait_for_examine" 00:08:05.090 } 00:08:05.090 ] 00:08:05.090 } 00:08:05.090 ] 00:08:05.090 } 00:08:05.350 [2024-07-12 06:32:45.039851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.350 [2024-07-12 06:32:45.073485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.609  Copying: 56/56 [kB] (average 54 MBps) 00:08:05.609 00:08:05.609 06:32:45 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.609 06:32:45 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:05.609 06:32:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.609 06:32:45 -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.609 06:32:45 -- dd/common.sh@12 -- # local size=57344 00:08:05.609 06:32:45 -- dd/common.sh@14 -- # local bs=1048576 00:08:05.609 06:32:45 -- dd/common.sh@15 -- # local count=1 00:08:05.609 06:32:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:05.609 06:32:45 -- dd/common.sh@18 -- # gen_conf 00:08:05.609 06:32:45 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.609 06:32:45 -- common/autotest_common.sh@10 -- # set +x 00:08:05.609 [2024-07-12 06:32:45.386790] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:08:05.609 [2024-07-12 06:32:45.386881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69742 ] 00:08:05.609 { 00:08:05.609 "subsystems": [ 00:08:05.609 { 00:08:05.609 "subsystem": "bdev", 00:08:05.609 "config": [ 00:08:05.609 { 00:08:05.609 "params": { 00:08:05.609 "trtype": "pcie", 00:08:05.609 "traddr": "0000:00:06.0", 00:08:05.609 "name": "Nvme0" 00:08:05.609 }, 00:08:05.609 "method": "bdev_nvme_attach_controller" 00:08:05.609 }, 00:08:05.609 { 00:08:05.609 "method": "bdev_wait_for_examine" 00:08:05.609 } 00:08:05.609 ] 00:08:05.609 } 00:08:05.609 ] 00:08:05.609 } 00:08:05.609 [2024-07-12 06:32:45.524245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.868 [2024-07-12 06:32:45.558298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.126  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.126 00:08:06.126 06:32:45 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:06.126 06:32:45 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:06.126 06:32:45 -- dd/basic_rw.sh@23 -- # count=3 00:08:06.126 06:32:45 -- dd/basic_rw.sh@24 -- # count=3 00:08:06.126 06:32:45 -- dd/basic_rw.sh@25 -- # size=49152 00:08:06.126 06:32:45 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:06.126 06:32:45 -- dd/common.sh@98 -- # xtrace_disable 00:08:06.126 06:32:45 -- common/autotest_common.sh@10 -- # set +x 00:08:06.694 06:32:46 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:06.694 06:32:46 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:06.694 06:32:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.694 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:08:06.694 [2024-07-12 06:32:46.408751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:06.694 [2024-07-12 06:32:46.408858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69760 ] 00:08:06.694 { 00:08:06.694 "subsystems": [ 00:08:06.694 { 00:08:06.694 "subsystem": "bdev", 00:08:06.694 "config": [ 00:08:06.694 { 00:08:06.694 "params": { 00:08:06.694 "trtype": "pcie", 00:08:06.694 "traddr": "0000:00:06.0", 00:08:06.694 "name": "Nvme0" 00:08:06.694 }, 00:08:06.694 "method": "bdev_nvme_attach_controller" 00:08:06.694 }, 00:08:06.694 { 00:08:06.694 "method": "bdev_wait_for_examine" 00:08:06.694 } 00:08:06.694 ] 00:08:06.694 } 00:08:06.694 ] 00:08:06.694 } 00:08:06.694 [2024-07-12 06:32:46.549682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.694 [2024-07-12 06:32:46.585374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.953  Copying: 48/48 [kB] (average 46 MBps) 00:08:06.954 00:08:06.954 06:32:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:06.954 06:32:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:06.954 06:32:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.954 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.213 [2024-07-12 06:32:46.902235] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:07.213 [2024-07-12 06:32:46.902321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69778 ] 00:08:07.213 { 00:08:07.213 "subsystems": [ 00:08:07.213 { 00:08:07.213 "subsystem": "bdev", 00:08:07.213 "config": [ 00:08:07.213 { 00:08:07.213 "params": { 00:08:07.213 "trtype": "pcie", 00:08:07.213 "traddr": "0000:00:06.0", 00:08:07.213 "name": "Nvme0" 00:08:07.213 }, 00:08:07.213 "method": "bdev_nvme_attach_controller" 00:08:07.213 }, 00:08:07.213 { 00:08:07.213 "method": "bdev_wait_for_examine" 00:08:07.213 } 00:08:07.213 ] 00:08:07.213 } 00:08:07.213 ] 00:08:07.213 } 00:08:07.213 [2024-07-12 06:32:47.035450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.213 [2024-07-12 06:32:47.070454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.471  Copying: 48/48 [kB] (average 46 MBps) 00:08:07.471 00:08:07.471 06:32:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.471 06:32:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:07.471 06:32:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.471 06:32:47 -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.471 06:32:47 -- dd/common.sh@12 -- # local size=49152 00:08:07.471 06:32:47 -- dd/common.sh@14 -- # local bs=1048576 00:08:07.471 06:32:47 -- dd/common.sh@15 -- # local count=1 00:08:07.471 06:32:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.471 06:32:47 -- dd/common.sh@18 -- # gen_conf 00:08:07.471 06:32:47 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.471 06:32:47 -- common/autotest_common.sh@10 -- # set +x 00:08:07.729 { 00:08:07.729 "subsystems": [ 00:08:07.729 { 00:08:07.729 
"subsystem": "bdev", 00:08:07.729 "config": [ 00:08:07.729 { 00:08:07.729 "params": { 00:08:07.729 "trtype": "pcie", 00:08:07.729 "traddr": "0000:00:06.0", 00:08:07.729 "name": "Nvme0" 00:08:07.729 }, 00:08:07.729 "method": "bdev_nvme_attach_controller" 00:08:07.729 }, 00:08:07.729 { 00:08:07.729 "method": "bdev_wait_for_examine" 00:08:07.729 } 00:08:07.729 ] 00:08:07.729 } 00:08:07.729 ] 00:08:07.729 } 00:08:07.729 [2024-07-12 06:32:47.409511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:07.729 [2024-07-12 06:32:47.409636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69786 ] 00:08:07.729 [2024-07-12 06:32:47.548886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.729 [2024-07-12 06:32:47.583555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.988  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:07.988 00:08:07.988 06:32:47 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:07.988 06:32:47 -- dd/basic_rw.sh@23 -- # count=3 00:08:07.988 06:32:47 -- dd/basic_rw.sh@24 -- # count=3 00:08:07.988 06:32:47 -- dd/basic_rw.sh@25 -- # size=49152 00:08:07.988 06:32:47 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:07.988 06:32:47 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.988 06:32:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.555 06:32:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:08.555 06:32:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:08.555 06:32:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.555 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:08:08.555 { 00:08:08.555 "subsystems": [ 00:08:08.555 { 00:08:08.555 "subsystem": "bdev", 00:08:08.555 "config": [ 00:08:08.555 { 00:08:08.555 "params": { 00:08:08.555 "trtype": "pcie", 00:08:08.555 "traddr": "0000:00:06.0", 00:08:08.555 "name": "Nvme0" 00:08:08.555 }, 00:08:08.555 "method": "bdev_nvme_attach_controller" 00:08:08.555 }, 00:08:08.555 { 00:08:08.555 "method": "bdev_wait_for_examine" 00:08:08.555 } 00:08:08.555 ] 00:08:08.555 } 00:08:08.555 ] 00:08:08.555 } 00:08:08.555 [2024-07-12 06:32:48.430932] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:08.555 [2024-07-12 06:32:48.431033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69804 ] 00:08:08.826 [2024-07-12 06:32:48.561329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.826 [2024-07-12 06:32:48.599722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.090  Copying: 48/48 [kB] (average 46 MBps) 00:08:09.090 00:08:09.090 06:32:48 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:09.090 06:32:48 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:09.090 06:32:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.090 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 [2024-07-12 06:32:48.957385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:09.090 [2024-07-12 06:32:48.957523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69821 ] 00:08:09.090 { 00:08:09.090 "subsystems": [ 00:08:09.090 { 00:08:09.090 "subsystem": "bdev", 00:08:09.090 "config": [ 00:08:09.090 { 00:08:09.090 "params": { 00:08:09.090 "trtype": "pcie", 00:08:09.090 "traddr": "0000:00:06.0", 00:08:09.090 "name": "Nvme0" 00:08:09.090 }, 00:08:09.090 "method": "bdev_nvme_attach_controller" 00:08:09.090 }, 00:08:09.090 { 00:08:09.090 "method": "bdev_wait_for_examine" 00:08:09.090 } 00:08:09.090 ] 00:08:09.090 } 00:08:09.090 ] 00:08:09.090 } 00:08:09.349 [2024-07-12 06:32:49.101390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.349 [2024-07-12 06:32:49.137585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.609  Copying: 48/48 [kB] (average 46 MBps) 00:08:09.609 00:08:09.609 06:32:49 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.609 06:32:49 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:09.609 06:32:49 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.609 06:32:49 -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.609 06:32:49 -- dd/common.sh@12 -- # local size=49152 00:08:09.609 06:32:49 -- dd/common.sh@14 -- # local bs=1048576 00:08:09.609 06:32:49 -- dd/common.sh@15 -- # local count=1 00:08:09.609 06:32:49 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.609 06:32:49 -- dd/common.sh@18 -- # gen_conf 00:08:09.609 06:32:49 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.609 06:32:49 -- common/autotest_common.sh@10 -- # set +x 00:08:09.609 { 00:08:09.609 "subsystems": [ 00:08:09.609 { 00:08:09.609 "subsystem": "bdev", 00:08:09.609 "config": [ 00:08:09.609 { 00:08:09.609 "params": { 00:08:09.609 "trtype": "pcie", 00:08:09.609 "traddr": "0000:00:06.0", 00:08:09.609 "name": "Nvme0" 00:08:09.609 }, 00:08:09.609 "method": "bdev_nvme_attach_controller" 00:08:09.609 }, 00:08:09.609 { 00:08:09.609 "method": "bdev_wait_for_examine" 00:08:09.609 } 00:08:09.609 ] 00:08:09.609 } 00:08:09.609 ] 00:08:09.609 } 00:08:09.609 [2024-07-12 06:32:49.467530] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:08:09.609 [2024-07-12 06:32:49.467648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69830 ] 00:08:09.869 [2024-07-12 06:32:49.612742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.869 [2024-07-12 06:32:49.648737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.129  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:10.129 00:08:10.129 00:08:10.129 real 0m12.543s 00:08:10.129 user 0m9.188s 00:08:10.129 sys 0m2.193s 00:08:10.129 06:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.129 06:32:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 ************************************ 00:08:10.129 END TEST dd_rw 00:08:10.129 ************************************ 00:08:10.129 06:32:49 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:10.129 06:32:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.129 06:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.129 06:32:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 ************************************ 00:08:10.129 START TEST dd_rw_offset 00:08:10.129 ************************************ 00:08:10.129 06:32:49 -- common/autotest_common.sh@1104 -- # basic_offset 00:08:10.129 06:32:49 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:10.129 06:32:49 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:10.129 06:32:49 -- dd/common.sh@98 -- # xtrace_disable 00:08:10.129 06:32:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 06:32:50 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:10.129 06:32:50 -- dd/basic_rw.sh@56 -- # 
data=cxjfklgkmnmdbt7aga1davpm9cz62nly[...]ykniu454t5kwqd (the 4096-byte random payload from gen_bytes 4096, elided here)
00:08:10.129 06:32:50 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
00:08:10.129 06:32:50 -- dd/basic_rw.sh@59 -- # gen_conf
00:08:10.129 06:32:50 -- dd/common.sh@31 -- # xtrace_disable
00:08:10.129 06:32:50 -- common/autotest_common.sh@10 -- # set +x
00:08:10.388 { "subsystems": [ { "subsystem": "bdev", "config": [ { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" }, "method": "bdev_nvme_attach_controller" }, { "method": "bdev_wait_for_examine" } ] } ] }
00:08:10.388 [2024-07-12 06:32:50.076577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:08:10.388 [2024-07-12 06:32:50.076673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ]
00:08:10.388 [2024-07-12 06:32:50.216645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.388 [2024-07-12 06:32:50.251179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.646  Copying: 4096/4096 [B] (average 4000 kBps)
00:08:10.646 06:32:50 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
00:08:10.646 06:32:50 -- dd/basic_rw.sh@65 -- # gen_conf
00:08:10.646 06:32:50 -- dd/common.sh@31 -- # xtrace_disable
00:08:10.646 06:32:50 -- common/autotest_common.sh@10 -- # set +x
00:08:10.904 [2024-07-12 06:32:50.569682] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
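The two spdk_dd runs above are the heart of dd_rw_offset: write at block offset 1, read that block back, compare. A minimal sketch of the same three steps, assuming $SPDK_DD points at the binary above, $conf holds the gen_conf JSON from the earlier sketch, and $data the 4096-byte gen_bytes payload:

    # 1) Write one 4 KiB block of random data at block offset 1 (--seek=1).
    "$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")
    # 2) Read that single block back, skipping block 0 (--skip=1 --count=1).
    "$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$conf")
    # 3) Pass iff the payload survived the bdev round trip intact.
    read -rn4096 data_check < test/dd/dd.dump1
    [[ $data_check == "$data" ]]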
00:08:10.904 [2024-07-12 06:32:50.569808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69872 ]
00:08:10.904 { "subsystems": [ { "subsystem": "bdev", "config": [ { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" }, "method": "bdev_nvme_attach_controller" }, { "method": "bdev_wait_for_examine" } ] } ] }
00:08:10.904 [2024-07-12 06:32:50.711642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.904 [2024-07-12 06:32:50.745814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.163  Copying: 4096/4096 [B] (average 4000 kBps)
00:08:11.163 06:32:51 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:08:11.164 06:32:51 -- dd/basic_rw.sh@72 -- # [[ cxjfklgkmnmdbt7aga1davpm9cz62nly[...] == \c\x\j\f\k\l\g\k\m\n\m\d[...] ]] (the 4096-byte payload and its backslash-escaped twin are elided; the strings matched)
00:08:11.164 real 0m1.020s
00:08:11.164 user 0m0.680s
00:08:11.164 sys 0m0.211s
00:08:11.164 06:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:11.164 06:32:51 -- common/autotest_common.sh@10 -- # set +x
00:08:11.164 ************************************
00:08:11.164 END TEST dd_rw_offset
00:08:11.164 ************************************
00:08:11.164 06:32:51 -- dd/basic_rw.sh@1 -- # cleanup
00:08:11.164 06:32:51 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:08:11.164 06:32:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:08:11.164 06:32:51 -- dd/common.sh@11 -- # local nvme_ref=
00:08:11.164 06:32:51 -- dd/common.sh@12 -- # local size=0xffff
00:08:11.164 06:32:51 -- dd/common.sh@14 -- # local bs=1048576
00:08:11.164 06:32:51 -- dd/common.sh@15 -- # local count=1 00:08:11.164 06:32:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:11.164 06:32:51 -- dd/common.sh@18 -- # gen_conf 00:08:11.164 06:32:51 -- dd/common.sh@31 -- # xtrace_disable 00:08:11.164 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:08:11.422 [2024-07-12 06:32:51.087780] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:11.422 [2024-07-12 06:32:51.087871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69904 ] 00:08:11.422 { 00:08:11.422 "subsystems": [ 00:08:11.422 { 00:08:11.422 "subsystem": "bdev", 00:08:11.422 "config": [ 00:08:11.422 { 00:08:11.422 "params": { 00:08:11.422 "trtype": "pcie", 00:08:11.422 "traddr": "0000:00:06.0", 00:08:11.422 "name": "Nvme0" 00:08:11.422 }, 00:08:11.422 "method": "bdev_nvme_attach_controller" 00:08:11.422 }, 00:08:11.422 { 00:08:11.422 "method": "bdev_wait_for_examine" 00:08:11.422 } 00:08:11.422 ] 00:08:11.422 } 00:08:11.422 ] 00:08:11.422 } 00:08:11.422 [2024-07-12 06:32:51.227730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.422 [2024-07-12 06:32:51.267845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.681  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:11.681 00:08:11.681 06:32:51 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.681 00:08:11.681 real 0m14.978s 00:08:11.681 user 0m10.673s 00:08:11.681 sys 0m2.797s 00:08:11.681 06:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.681 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:08:11.681 ************************************ 00:08:11.681 END TEST spdk_dd_basic_rw 00:08:11.681 ************************************ 00:08:11.940 06:32:51 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:11.940 06:32:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.940 06:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.940 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 ************************************ 00:08:11.940 START TEST spdk_dd_posix 00:08:11.940 ************************************ 00:08:11.940 06:32:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:11.940 * Looking for test storage... 
00:08:11.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:11.940 06:32:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.940 06:32:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.940 06:32:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.940 06:32:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.940 06:32:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.940 06:32:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.940 06:32:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.940 06:32:51 -- paths/export.sh@5 -- # export PATH 00:08:11.940 06:32:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.940 06:32:51 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:11.940 06:32:51 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:11.940 06:32:51 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:11.940 06:32:51 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:11.940 06:32:51 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.940 06:32:51 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.940 06:32:51 -- dd/posix.sh@130 -- # tests 00:08:11.940 06:32:51 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:11.940 * First test run, liburing in use 00:08:11.940 06:32:51 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:08:11.940 06:32:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.940 06:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.940 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 ************************************ 00:08:11.940 START TEST dd_flag_append 00:08:11.940 ************************************ 00:08:11.940 06:32:51 -- common/autotest_common.sh@1104 -- # append 00:08:11.940 06:32:51 -- dd/posix.sh@16 -- # local dump0 00:08:11.940 06:32:51 -- dd/posix.sh@17 -- # local dump1 00:08:11.940 06:32:51 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:11.940 06:32:51 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.940 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 06:32:51 -- dd/posix.sh@19 -- # dump0=wkk5i4dgkh4p3gtu0r5x9qntfvgmo87e 00:08:11.940 06:32:51 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:11.940 06:32:51 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.940 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 06:32:51 -- dd/posix.sh@20 -- # dump1=e8ssri1n4g7e3j59dqlt3p0u8f9727z3 00:08:11.940 06:32:51 -- dd/posix.sh@22 -- # printf %s wkk5i4dgkh4p3gtu0r5x9qntfvgmo87e 00:08:11.940 06:32:51 -- dd/posix.sh@23 -- # printf %s e8ssri1n4g7e3j59dqlt3p0u8f9727z3 00:08:11.940 06:32:51 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:11.940 [2024-07-12 06:32:51.770215] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:11.940 [2024-07-12 06:32:51.770316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69956 ] 00:08:12.198 [2024-07-12 06:32:51.908382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.198 [2024-07-12 06:32:51.958223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.457  Copying: 32/32 [B] (average 31 kBps) 00:08:12.457 00:08:12.457 06:32:52 -- dd/posix.sh@27 -- # [[ e8ssri1n4g7e3j59dqlt3p0u8f9727z3wkk5i4dgkh4p3gtu0r5x9qntfvgmo87e == \e\8\s\s\r\i\1\n\4\g\7\e\3\j\5\9\d\q\l\t\3\p\0\u\8\f\9\7\2\7\z\3\w\k\k\5\i\4\d\g\k\h\4\p\3\g\t\u\0\r\5\x\9\q\n\t\f\v\g\m\o\8\7\e ]] 00:08:12.457 00:08:12.457 real 0m0.472s 00:08:12.457 user 0m0.238s 00:08:12.457 sys 0m0.111s 00:08:12.457 ************************************ 00:08:12.457 END TEST dd_flag_append 00:08:12.457 ************************************ 00:08:12.457 06:32:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.457 06:32:52 -- common/autotest_common.sh@10 -- # set +x 00:08:12.457 06:32:52 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:12.457 06:32:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:12.457 06:32:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.457 06:32:52 -- common/autotest_common.sh@10 -- # set +x 00:08:12.457 ************************************ 00:08:12.457 START TEST dd_flag_directory 00:08:12.457 ************************************ 00:08:12.457 06:32:52 -- common/autotest_common.sh@1104 -- # directory 00:08:12.457 06:32:52 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.457 06:32:52 -- 
common/autotest_common.sh@640 -- # local es=0 00:08:12.457 06:32:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.457 06:32:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.457 06:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.457 06:32:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.457 06:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.457 06:32:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.457 06:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.457 06:32:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.457 06:32:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.457 06:32:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.457 [2024-07-12 06:32:52.278566] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:12.457 [2024-07-12 06:32:52.278691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69988 ] 00:08:12.716 [2024-07-12 06:32:52.413785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.716 [2024-07-12 06:32:52.455584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.716 [2024-07-12 06:32:52.510822] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:12.716 [2024-07-12 06:32:52.510978] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:12.716 [2024-07-12 06:32:52.511008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.716 [2024-07-12 06:32:52.583511] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:12.977 06:32:52 -- common/autotest_common.sh@643 -- # es=236 00:08:12.977 06:32:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:12.977 06:32:52 -- common/autotest_common.sh@652 -- # es=108 00:08:12.977 06:32:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:12.977 06:32:52 -- common/autotest_common.sh@660 -- # es=1 00:08:12.977 06:32:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:12.977 06:32:52 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:12.977 06:32:52 -- common/autotest_common.sh@640 -- # local es=0 00:08:12.977 06:32:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:12.977 06:32:52 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.977 06:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.977 06:32:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.977 06:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.977 06:32:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.977 06:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:12.977 06:32:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.977 06:32:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.977 06:32:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:12.977 [2024-07-12 06:32:52.724905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:12.977 [2024-07-12 06:32:52.725084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69992 ] 00:08:12.977 [2024-07-12 06:32:52.865759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.235 [2024-07-12 06:32:52.909101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.235 [2024-07-12 06:32:52.955563] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.235 [2024-07-12 06:32:52.955625] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.235 [2024-07-12 06:32:52.955639] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.235 [2024-07-12 06:32:53.021756] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:13.235 06:32:53 -- common/autotest_common.sh@643 -- # es=236 00:08:13.235 06:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.235 06:32:53 -- common/autotest_common.sh@652 -- # es=108 00:08:13.235 06:32:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:13.235 06:32:53 -- common/autotest_common.sh@660 -- # es=1 00:08:13.235 06:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.235 00:08:13.235 real 0m0.867s 00:08:13.235 user 0m0.438s 00:08:13.235 sys 0m0.217s 00:08:13.235 06:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.235 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:08:13.235 ************************************ 00:08:13.235 END TEST dd_flag_directory 00:08:13.235 ************************************ 00:08:13.235 06:32:53 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:13.235 06:32:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.235 06:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.235 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:08:13.235 ************************************ 00:08:13.235 START TEST dd_flag_nofollow 00:08:13.235 ************************************ 00:08:13.235 06:32:53 -- common/autotest_common.sh@1104 -- # nofollow 00:08:13.235 06:32:53 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.235 06:32:53 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.235 06:32:53 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.235 06:32:53 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.235 06:32:53 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.235 06:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.235 06:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.235 06:32:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.235 06:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.235 06:32:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.235 06:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.235 06:32:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.235 06:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.235 06:32:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.235 06:32:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.235 06:32:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.492 [2024-07-12 06:32:53.185354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
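What the first NOT above is asserting, as a sketch (paths and flags are the ones traced in the log; NOT is the autotest_common.sh helper that succeeds only when its command fails, and $SPDK_DD stands for the spdk_dd binary path):

    ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
    ln -fs test/dd/dd.dump1 test/dd/dd.dump1.link
    # --iflag=nofollow opens the input with O_NOFOLLOW, so a symlinked --if
    # must fail with ELOOP ("Too many levels of symbolic links"), and NOT
    # inverts that expected failure into a test success:
    NOT "$SPDK_DD" --if=test/dd/dd.dump0.link --iflag=nofollow \
        --of=test/dd/dd.dump1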
00:08:13.492 [2024-07-12 06:32:53.185438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70025 ] 00:08:13.492 [2024-07-12 06:32:53.318917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.492 [2024-07-12 06:32:53.354803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.492 [2024-07-12 06:32:53.400243] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:13.492 [2024-07-12 06:32:53.400301] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:13.492 [2024-07-12 06:32:53.400316] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.751 [2024-07-12 06:32:53.462654] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:13.751 06:32:53 -- common/autotest_common.sh@643 -- # es=216 00:08:13.751 06:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:13.751 06:32:53 -- common/autotest_common.sh@652 -- # es=88 00:08:13.751 06:32:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:13.751 06:32:53 -- common/autotest_common.sh@660 -- # es=1 00:08:13.751 06:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:13.751 06:32:53 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:13.751 06:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:08:13.751 06:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:13.751 06:32:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.751 06:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.751 06:32:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.751 06:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.751 06:32:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.751 06:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:13.751 06:32:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.751 06:32:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.751 06:32:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:13.751 [2024-07-12 06:32:53.589192] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
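The es bookkeeping traced above is NOT normalizing the expected failure before declaring success. A minimal sketch of that logic, under the assumption that this is all the wrapper does with the status (the real helper lives in autotest_common.sh):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))  # fold a killed-by-signal status (216 -> 88 here)
        (( es != 0 )) && es=1               # collapse any remaining failure to 1
        (( !es == 0 ))                      # exit 0 iff the wrapped command failed
    }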
00:08:13.751 [2024-07-12 06:32:53.589317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70030 ] 00:08:14.009 [2024-07-12 06:32:53.734007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.009 [2024-07-12 06:32:53.772701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.009 [2024-07-12 06:32:53.825101] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:14.009 [2024-07-12 06:32:53.825183] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:14.009 [2024-07-12 06:32:53.825209] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.009 [2024-07-12 06:32:53.888922] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:14.267 06:32:53 -- common/autotest_common.sh@643 -- # es=216 00:08:14.267 06:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:14.267 06:32:53 -- common/autotest_common.sh@652 -- # es=88 00:08:14.267 06:32:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:14.267 06:32:53 -- common/autotest_common.sh@660 -- # es=1 00:08:14.267 06:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:14.267 06:32:53 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:14.267 06:32:53 -- dd/common.sh@98 -- # xtrace_disable 00:08:14.267 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:08:14.267 06:32:53 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.267 [2024-07-12 06:32:54.001988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:14.267 [2024-07-12 06:32:54.002097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70038 ]
00:08:14.267 [2024-07-12 06:32:54.140315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:14.267 [2024-07-12 06:32:54.183293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:14.525  Copying: 512/512 [B] (average 500 kBps)
00:08:14.525 06:32:54 -- dd/posix.sh@49 -- # [[ dy5tt4frwczi3ymjghj266c8h59rwo60[...] == \d\y\5\t\t\4[...] ]] (the 512-byte payload and its backslash-escaped twin are elided; the strings matched)
00:08:14.525 real 0m1.263s
00:08:14.525 user 0m0.642s
00:08:14.525 sys 0m0.286s
00:08:14.525 06:32:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.525 06:32:54 -- common/autotest_common.sh@10 -- # set +x
00:08:14.525 ************************************
00:08:14.525 END TEST dd_flag_nofollow
00:08:14.525 ************************************
00:08:14.525 06:32:54 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:08:14.525 06:32:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:08:14.525 06:32:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:14.525 06:32:54 -- common/autotest_common.sh@10 -- # set +x
00:08:14.783 ************************************
00:08:14.783 START TEST dd_flag_noatime
00:08:14.783 ************************************
00:08:14.783 06:32:54 -- common/autotest_common.sh@1104 -- # noatime
00:08:14.783 06:32:54 -- dd/posix.sh@53 -- # local atime_if
00:08:14.783 06:32:54 -- dd/posix.sh@54 -- # local atime_of
00:08:14.783 06:32:54 -- dd/posix.sh@58 -- # gen_bytes 512
00:08:14.783 06:32:54 -- dd/common.sh@98 -- # xtrace_disable
00:08:14.783 06:32:54 -- common/autotest_common.sh@10 -- # set +x
00:08:14.783 06:32:54 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:08:14.783 06:32:54 -- dd/posix.sh@60 -- # atime_if=1720765974
00:08:14.783 06:32:54 --
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.783 06:32:54 -- dd/posix.sh@61 -- # atime_of=1720765974 00:08:14.783 06:32:54 -- dd/posix.sh@66 -- # sleep 1 00:08:15.717 06:32:55 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.717 [2024-07-12 06:32:55.519016] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:15.717 [2024-07-12 06:32:55.519179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70078 ] 00:08:15.977 [2024-07-12 06:32:55.663556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.977 [2024-07-12 06:32:55.699492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.235  Copying: 512/512 [B] (average 500 kBps) 00:08:16.235 00:08:16.235 06:32:55 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.235 06:32:55 -- dd/posix.sh@69 -- # (( atime_if == 1720765974 )) 00:08:16.235 06:32:55 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.235 06:32:55 -- dd/posix.sh@70 -- # (( atime_of == 1720765974 )) 00:08:16.235 06:32:55 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.235 [2024-07-12 06:32:55.982348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:16.235 [2024-07-12 06:32:55.982483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70090 ] 00:08:16.235 [2024-07-12 06:32:56.123868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.493 [2024-07-12 06:32:56.160126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.493  Copying: 512/512 [B] (average 500 kBps) 00:08:16.493 00:08:16.493 06:32:56 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.493 06:32:56 -- dd/posix.sh@73 -- # (( atime_if < 1720765976 )) 00:08:16.493 00:08:16.493 real 0m1.929s 00:08:16.493 user 0m0.446s 00:08:16.493 sys 0m0.221s 00:08:16.493 06:32:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.493 06:32:56 -- common/autotest_common.sh@10 -- # set +x 00:08:16.493 ************************************ 00:08:16.493 END TEST dd_flag_noatime 00:08:16.493 ************************************ 00:08:16.493 06:32:56 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:16.493 06:32:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:16.493 06:32:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.493 06:32:56 -- common/autotest_common.sh@10 -- # set +x 00:08:16.752 ************************************ 00:08:16.752 START TEST dd_flags_misc 00:08:16.752 ************************************ 00:08:16.752 06:32:56 -- common/autotest_common.sh@1104 -- # io 00:08:16.752 06:32:56 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:16.752 06:32:56 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
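The noatime checks just traced boil down to two stat calls around each copy. A sketch with this run's values ($SPDK_DD and the relative paths are illustrative, as in the earlier sketches):

    atime_if=$(stat --printf=%X test/dd/dd.dump0)   # 1720765974 in this run
    sleep 1
    # --iflag=noatime opens the input with O_NOATIME, so the read must not
    # bump the input file's atime:
    "$SPDK_DD" --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
    (( $(stat --printf=%X test/dd/dd.dump0) == atime_if ))
    # ...while the same copy without the flag must advance it:
    "$SPDK_DD" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1
    (( atime_if < $(stat --printf=%X test/dd/dd.dump0) ))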
00:08:16.752 06:32:56 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:08:16.752 06:32:56 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:08:16.752 06:32:56 -- dd/posix.sh@86 -- # gen_bytes 512
00:08:16.752 06:32:56 -- dd/common.sh@98 -- # xtrace_disable
00:08:16.752 06:32:56 -- common/autotest_common.sh@10 -- # set +x
00:08:16.752 06:32:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:08:16.752 06:32:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:08:16.752 [2024-07-12 06:32:56.479396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:08:16.752 [2024-07-12 06:32:56.479529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70116 ]
00:08:16.752 [2024-07-12 06:32:56.616469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.752 [2024-07-12 06:32:56.652728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.012  Copying: 512/512 [B] (average 500 kBps)
00:08:17.012 06:32:56 -- dd/posix.sh@93 -- # [[ 0git7ej7oyg69msqlwkhpoc8y74b9qjk[...] == \0\g\i\t\7\e\j\7[...] ]] (the 512-byte payload and its backslash-escaped twin are elided; the strings matched)
00:08:17.012 06:32:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:08:17.012 06:32:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:08:17.012 [2024-07-12 06:32:56.911092] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
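dd_flags_misc pairs every read flag with every write flag and re-checks the 512-byte payload after each pass. A sketch of the loop being traced here (the inline payload generator is a stand-in for the harness's gen_bytes; $SPDK_DD as before):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        # Stand-in for gen_bytes 512: a fresh 512-byte alphanumeric payload.
        data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 512)
        printf '%s' "$data" > test/dd/dd.dump0
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if=test/dd/dd.dump0 --iflag="$flag_ro" \
                       --of=test/dd/dd.dump1 --oflag="$flag_rw"
            [[ $(<test/dd/dd.dump1) == "$data" ]]   # payload must survive each pass
        done
    done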
00:08:17.012 [2024-07-12 06:32:56.911222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70124 ] 00:08:17.272 [2024-07-12 06:32:57.057088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.272 [2024-07-12 06:32:57.093554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.529  Copying: 512/512 [B] (average 500 kBps) 00:08:17.529 00:08:17.529 06:32:57 -- dd/posix.sh@93 -- # [[ 0git7ej7oyg69msqlwkhpoc8y74b9qjkjgk6lrwyo0tgkqqhhcj7jcvccym8o5h39be5fdna2ew9qd2tybz9ln599vq1ujl520erxdfve3pibkzo1a0dpk2tk17ld476mma0wzzte4j0b8orjh0rr4ixm95cyrat6t0hwlbyygbsgjrcykqmykk1af9j62c743jfn0gr2v8w3zma3an4yxhzv16lpaecqhpuxmni4axq4bv4unurp3zv12epk17w1p9rtnu4s7dlojbxqjog06ymphttsm46oorgzaobqlpdjhqrv72t9dh9z67gtdswdaiqiv4dub8stgvmprki9lumhwebig2e5o3usfizpkwrssx3birka7uam8p7vcq2fn7gwm334bg8uwv49nghcpz9i5gqslgpdl8mtfi9pngig52c7uhfjiijzqkraf9uvt9b5e5ssmjqy74mp7x2snh25oe984mxzijyb2ebephsib7lok2njbwfezp7udjl == \0\g\i\t\7\e\j\7\o\y\g\6\9\m\s\q\l\w\k\h\p\o\c\8\y\7\4\b\9\q\j\k\j\g\k\6\l\r\w\y\o\0\t\g\k\q\q\h\h\c\j\7\j\c\v\c\c\y\m\8\o\5\h\3\9\b\e\5\f\d\n\a\2\e\w\9\q\d\2\t\y\b\z\9\l\n\5\9\9\v\q\1\u\j\l\5\2\0\e\r\x\d\f\v\e\3\p\i\b\k\z\o\1\a\0\d\p\k\2\t\k\1\7\l\d\4\7\6\m\m\a\0\w\z\z\t\e\4\j\0\b\8\o\r\j\h\0\r\r\4\i\x\m\9\5\c\y\r\a\t\6\t\0\h\w\l\b\y\y\g\b\s\g\j\r\c\y\k\q\m\y\k\k\1\a\f\9\j\6\2\c\7\4\3\j\f\n\0\g\r\2\v\8\w\3\z\m\a\3\a\n\4\y\x\h\z\v\1\6\l\p\a\e\c\q\h\p\u\x\m\n\i\4\a\x\q\4\b\v\4\u\n\u\r\p\3\z\v\1\2\e\p\k\1\7\w\1\p\9\r\t\n\u\4\s\7\d\l\o\j\b\x\q\j\o\g\0\6\y\m\p\h\t\t\s\m\4\6\o\o\r\g\z\a\o\b\q\l\p\d\j\h\q\r\v\7\2\t\9\d\h\9\z\6\7\g\t\d\s\w\d\a\i\q\i\v\4\d\u\b\8\s\t\g\v\m\p\r\k\i\9\l\u\m\h\w\e\b\i\g\2\e\5\o\3\u\s\f\i\z\p\k\w\r\s\s\x\3\b\i\r\k\a\7\u\a\m\8\p\7\v\c\q\2\f\n\7\g\w\m\3\3\4\b\g\8\u\w\v\4\9\n\g\h\c\p\z\9\i\5\g\q\s\l\g\p\d\l\8\m\t\f\i\9\p\n\g\i\g\5\2\c\7\u\h\f\j\i\i\j\z\q\k\r\a\f\9\u\v\t\9\b\5\e\5\s\s\m\j\q\y\7\4\m\p\7\x\2\s\n\h\2\5\o\e\9\8\4\m\x\z\i\j\y\b\2\e\b\e\p\h\s\i\b\7\l\o\k\2\n\j\b\w\f\e\z\p\7\u\d\j\l ]] 00:08:17.529 06:32:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.530 06:32:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:17.530 [2024-07-12 06:32:57.352120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
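The very long [[ 0git7ej... == \0\g\i\t... ]] lines in this test are bash xtrace output of the data-integrity check, not corruption: inside [[ ]] the right-hand side of == is a glob pattern, so the harness backslash-escapes every character of the expected payload to force a literal match, and set -x prints all of those escapes. The same check in plain form:

  data=$(<dd.dump0)    # payload that was written
  copy=$(<dd.dump1)    # payload read back through spdk_dd
  # Quoting the right-hand side (like the escaping above) disables
  # globbing, making this an exact byte-for-byte string comparison.
  [[ $copy == "$data" ]] && echo "round-trip intact"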
00:08:17.530 [2024-07-12 06:32:57.352280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70126 ] 00:08:17.788 [2024-07-12 06:32:57.493947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.788 [2024-07-12 06:32:57.537162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.047  Copying: 512/512 [B] (average 166 kBps) 00:08:18.047 00:08:18.047 06:32:57 -- dd/posix.sh@93 -- # [[ 0git7ej7oyg69msqlwkhpoc8y74b9qjkjgk6lrwyo0tgkqqhhcj7jcvccym8o5h39be5fdna2ew9qd2tybz9ln599vq1ujl520erxdfve3pibkzo1a0dpk2tk17ld476mma0wzzte4j0b8orjh0rr4ixm95cyrat6t0hwlbyygbsgjrcykqmykk1af9j62c743jfn0gr2v8w3zma3an4yxhzv16lpaecqhpuxmni4axq4bv4unurp3zv12epk17w1p9rtnu4s7dlojbxqjog06ymphttsm46oorgzaobqlpdjhqrv72t9dh9z67gtdswdaiqiv4dub8stgvmprki9lumhwebig2e5o3usfizpkwrssx3birka7uam8p7vcq2fn7gwm334bg8uwv49nghcpz9i5gqslgpdl8mtfi9pngig52c7uhfjiijzqkraf9uvt9b5e5ssmjqy74mp7x2snh25oe984mxzijyb2ebephsib7lok2njbwfezp7udjl == \0\g\i\t\7\e\j\7\o\y\g\6\9\m\s\q\l\w\k\h\p\o\c\8\y\7\4\b\9\q\j\k\j\g\k\6\l\r\w\y\o\0\t\g\k\q\q\h\h\c\j\7\j\c\v\c\c\y\m\8\o\5\h\3\9\b\e\5\f\d\n\a\2\e\w\9\q\d\2\t\y\b\z\9\l\n\5\9\9\v\q\1\u\j\l\5\2\0\e\r\x\d\f\v\e\3\p\i\b\k\z\o\1\a\0\d\p\k\2\t\k\1\7\l\d\4\7\6\m\m\a\0\w\z\z\t\e\4\j\0\b\8\o\r\j\h\0\r\r\4\i\x\m\9\5\c\y\r\a\t\6\t\0\h\w\l\b\y\y\g\b\s\g\j\r\c\y\k\q\m\y\k\k\1\a\f\9\j\6\2\c\7\4\3\j\f\n\0\g\r\2\v\8\w\3\z\m\a\3\a\n\4\y\x\h\z\v\1\6\l\p\a\e\c\q\h\p\u\x\m\n\i\4\a\x\q\4\b\v\4\u\n\u\r\p\3\z\v\1\2\e\p\k\1\7\w\1\p\9\r\t\n\u\4\s\7\d\l\o\j\b\x\q\j\o\g\0\6\y\m\p\h\t\t\s\m\4\6\o\o\r\g\z\a\o\b\q\l\p\d\j\h\q\r\v\7\2\t\9\d\h\9\z\6\7\g\t\d\s\w\d\a\i\q\i\v\4\d\u\b\8\s\t\g\v\m\p\r\k\i\9\l\u\m\h\w\e\b\i\g\2\e\5\o\3\u\s\f\i\z\p\k\w\r\s\s\x\3\b\i\r\k\a\7\u\a\m\8\p\7\v\c\q\2\f\n\7\g\w\m\3\3\4\b\g\8\u\w\v\4\9\n\g\h\c\p\z\9\i\5\g\q\s\l\g\p\d\l\8\m\t\f\i\9\p\n\g\i\g\5\2\c\7\u\h\f\j\i\i\j\z\q\k\r\a\f\9\u\v\t\9\b\5\e\5\s\s\m\j\q\y\7\4\m\p\7\x\2\s\n\h\2\5\o\e\9\8\4\m\x\z\i\j\y\b\2\e\b\e\p\h\s\i\b\7\l\o\k\2\n\j\b\w\f\e\z\p\7\u\d\j\l ]] 00:08:18.047 06:32:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.047 06:32:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:18.047 [2024-07-12 06:32:57.803463] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:18.047 [2024-07-12 06:32:57.803586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:08:18.047 [2024-07-12 06:32:57.940747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.306 [2024-07-12 06:32:57.976023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.306  Copying: 512/512 [B] (average 500 kBps) 00:08:18.306 00:08:18.306 06:32:58 -- dd/posix.sh@93 -- # [[ 0git7ej7oyg69msqlwkhpoc8y74b9qjkjgk6lrwyo0tgkqqhhcj7jcvccym8o5h39be5fdna2ew9qd2tybz9ln599vq1ujl520erxdfve3pibkzo1a0dpk2tk17ld476mma0wzzte4j0b8orjh0rr4ixm95cyrat6t0hwlbyygbsgjrcykqmykk1af9j62c743jfn0gr2v8w3zma3an4yxhzv16lpaecqhpuxmni4axq4bv4unurp3zv12epk17w1p9rtnu4s7dlojbxqjog06ymphttsm46oorgzaobqlpdjhqrv72t9dh9z67gtdswdaiqiv4dub8stgvmprki9lumhwebig2e5o3usfizpkwrssx3birka7uam8p7vcq2fn7gwm334bg8uwv49nghcpz9i5gqslgpdl8mtfi9pngig52c7uhfjiijzqkraf9uvt9b5e5ssmjqy74mp7x2snh25oe984mxzijyb2ebephsib7lok2njbwfezp7udjl == \0\g\i\t\7\e\j\7\o\y\g\6\9\m\s\q\l\w\k\h\p\o\c\8\y\7\4\b\9\q\j\k\j\g\k\6\l\r\w\y\o\0\t\g\k\q\q\h\h\c\j\7\j\c\v\c\c\y\m\8\o\5\h\3\9\b\e\5\f\d\n\a\2\e\w\9\q\d\2\t\y\b\z\9\l\n\5\9\9\v\q\1\u\j\l\5\2\0\e\r\x\d\f\v\e\3\p\i\b\k\z\o\1\a\0\d\p\k\2\t\k\1\7\l\d\4\7\6\m\m\a\0\w\z\z\t\e\4\j\0\b\8\o\r\j\h\0\r\r\4\i\x\m\9\5\c\y\r\a\t\6\t\0\h\w\l\b\y\y\g\b\s\g\j\r\c\y\k\q\m\y\k\k\1\a\f\9\j\6\2\c\7\4\3\j\f\n\0\g\r\2\v\8\w\3\z\m\a\3\a\n\4\y\x\h\z\v\1\6\l\p\a\e\c\q\h\p\u\x\m\n\i\4\a\x\q\4\b\v\4\u\n\u\r\p\3\z\v\1\2\e\p\k\1\7\w\1\p\9\r\t\n\u\4\s\7\d\l\o\j\b\x\q\j\o\g\0\6\y\m\p\h\t\t\s\m\4\6\o\o\r\g\z\a\o\b\q\l\p\d\j\h\q\r\v\7\2\t\9\d\h\9\z\6\7\g\t\d\s\w\d\a\i\q\i\v\4\d\u\b\8\s\t\g\v\m\p\r\k\i\9\l\u\m\h\w\e\b\i\g\2\e\5\o\3\u\s\f\i\z\p\k\w\r\s\s\x\3\b\i\r\k\a\7\u\a\m\8\p\7\v\c\q\2\f\n\7\g\w\m\3\3\4\b\g\8\u\w\v\4\9\n\g\h\c\p\z\9\i\5\g\q\s\l\g\p\d\l\8\m\t\f\i\9\p\n\g\i\g\5\2\c\7\u\h\f\j\i\i\j\z\q\k\r\a\f\9\u\v\t\9\b\5\e\5\s\s\m\j\q\y\7\4\m\p\7\x\2\s\n\h\2\5\o\e\9\8\4\m\x\z\i\j\y\b\2\e\b\e\p\h\s\i\b\7\l\o\k\2\n\j\b\w\f\e\z\p\7\u\d\j\l ]] 00:08:18.306 06:32:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:18.306 06:32:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:18.306 06:32:58 -- dd/common.sh@98 -- # xtrace_disable 00:08:18.306 06:32:58 -- common/autotest_common.sh@10 -- # set +x 00:08:18.306 06:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.306 06:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:18.564 [2024-07-12 06:32:58.226377] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:18.564 [2024-07-12 06:32:58.226465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70141 ] 00:08:18.564 [2024-07-12 06:32:58.360864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.564 [2024-07-12 06:32:58.394826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.822  Copying: 512/512 [B] (average 500 kBps) 00:08:18.822 00:08:18.822 06:32:58 -- dd/posix.sh@93 -- # [[ i7c17aze8vx0j1rr9kqlo9q9ifb57vcru9tpsgpl7e7xwzss9u1mexb2oqp4vqelqqqlt18unftf5jkz3nulkwqjx8wfpg7kljzhnffc8l7h8vov17cs4cyt9bs6v4kyzggpfy3clrhx381rt2eob9s9jrob2smnh1xw970qym47oj0fjbwtt3f30zlwh5601epsuqq8jt3ainu82m00o9xixx2mg7sxm9yf3iki9q1f14ea94mpy5xljx4ykv0ihkknfrsb37rerx788tkvmkw4se1cswgrti0axmu4t6377jzpknxfzvpdbli0fsr1l3nsf9nh5i1l76nu2fofvsps7zdqgevc05pxuevmnq8a40f94ofef2si26tehpf99awi2uye18vh8wic3u0xpezcrc7fqreeqsiwvz1labhlzr23nag0qf4q3r1fec4t0p9jm56b1nvxs3baq9zzh5gvgjh62a0tierj6683ef23hjshxanexrjxxduigyc8 == \i\7\c\1\7\a\z\e\8\v\x\0\j\1\r\r\9\k\q\l\o\9\q\9\i\f\b\5\7\v\c\r\u\9\t\p\s\g\p\l\7\e\7\x\w\z\s\s\9\u\1\m\e\x\b\2\o\q\p\4\v\q\e\l\q\q\q\l\t\1\8\u\n\f\t\f\5\j\k\z\3\n\u\l\k\w\q\j\x\8\w\f\p\g\7\k\l\j\z\h\n\f\f\c\8\l\7\h\8\v\o\v\1\7\c\s\4\c\y\t\9\b\s\6\v\4\k\y\z\g\g\p\f\y\3\c\l\r\h\x\3\8\1\r\t\2\e\o\b\9\s\9\j\r\o\b\2\s\m\n\h\1\x\w\9\7\0\q\y\m\4\7\o\j\0\f\j\b\w\t\t\3\f\3\0\z\l\w\h\5\6\0\1\e\p\s\u\q\q\8\j\t\3\a\i\n\u\8\2\m\0\0\o\9\x\i\x\x\2\m\g\7\s\x\m\9\y\f\3\i\k\i\9\q\1\f\1\4\e\a\9\4\m\p\y\5\x\l\j\x\4\y\k\v\0\i\h\k\k\n\f\r\s\b\3\7\r\e\r\x\7\8\8\t\k\v\m\k\w\4\s\e\1\c\s\w\g\r\t\i\0\a\x\m\u\4\t\6\3\7\7\j\z\p\k\n\x\f\z\v\p\d\b\l\i\0\f\s\r\1\l\3\n\s\f\9\n\h\5\i\1\l\7\6\n\u\2\f\o\f\v\s\p\s\7\z\d\q\g\e\v\c\0\5\p\x\u\e\v\m\n\q\8\a\4\0\f\9\4\o\f\e\f\2\s\i\2\6\t\e\h\p\f\9\9\a\w\i\2\u\y\e\1\8\v\h\8\w\i\c\3\u\0\x\p\e\z\c\r\c\7\f\q\r\e\e\q\s\i\w\v\z\1\l\a\b\h\l\z\r\2\3\n\a\g\0\q\f\4\q\3\r\1\f\e\c\4\t\0\p\9\j\m\5\6\b\1\n\v\x\s\3\b\a\q\9\z\z\h\5\g\v\g\j\h\6\2\a\0\t\i\e\r\j\6\6\8\3\e\f\2\3\h\j\s\h\x\a\n\e\x\r\j\x\x\d\u\i\g\y\c\8 ]] 00:08:18.822 06:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.822 06:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:18.822 [2024-07-12 06:32:58.654814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
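Taken together, the iterations logged in this test sweep every read-side open flag against every write-side flag (dd/posix.sh@81 through @93 in the xtrace). Schematically, with paths shortened:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)   # write side adds O_SYNC and O_DSYNC

  for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512                        # fresh payload per read flag
      for flag_rw in "${flags_rw[@]}"; do
          spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
                  --of=dd.dump1 --oflag="$flag_rw"
          [[ $(<dd.dump1) == "$(<dd.dump0)" ]] || exit 1
      done
  done

Each combination is a separate spdk_dd app start, which is why every pass brings its own "Starting SPDK ... DPDK EAL parameters" banner and a new spdk_pid.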
00:08:18.822 [2024-07-12 06:32:58.654910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70148 ] 00:08:19.080 [2024-07-12 06:32:58.786284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.080 [2024-07-12 06:32:58.823171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.338  Copying: 512/512 [B] (average 500 kBps) 00:08:19.338 00:08:19.338 06:32:59 -- dd/posix.sh@93 -- # [[ i7c17aze8vx0j1rr9kqlo9q9ifb57vcru9tpsgpl7e7xwzss9u1mexb2oqp4vqelqqqlt18unftf5jkz3nulkwqjx8wfpg7kljzhnffc8l7h8vov17cs4cyt9bs6v4kyzggpfy3clrhx381rt2eob9s9jrob2smnh1xw970qym47oj0fjbwtt3f30zlwh5601epsuqq8jt3ainu82m00o9xixx2mg7sxm9yf3iki9q1f14ea94mpy5xljx4ykv0ihkknfrsb37rerx788tkvmkw4se1cswgrti0axmu4t6377jzpknxfzvpdbli0fsr1l3nsf9nh5i1l76nu2fofvsps7zdqgevc05pxuevmnq8a40f94ofef2si26tehpf99awi2uye18vh8wic3u0xpezcrc7fqreeqsiwvz1labhlzr23nag0qf4q3r1fec4t0p9jm56b1nvxs3baq9zzh5gvgjh62a0tierj6683ef23hjshxanexrjxxduigyc8 == \i\7\c\1\7\a\z\e\8\v\x\0\j\1\r\r\9\k\q\l\o\9\q\9\i\f\b\5\7\v\c\r\u\9\t\p\s\g\p\l\7\e\7\x\w\z\s\s\9\u\1\m\e\x\b\2\o\q\p\4\v\q\e\l\q\q\q\l\t\1\8\u\n\f\t\f\5\j\k\z\3\n\u\l\k\w\q\j\x\8\w\f\p\g\7\k\l\j\z\h\n\f\f\c\8\l\7\h\8\v\o\v\1\7\c\s\4\c\y\t\9\b\s\6\v\4\k\y\z\g\g\p\f\y\3\c\l\r\h\x\3\8\1\r\t\2\e\o\b\9\s\9\j\r\o\b\2\s\m\n\h\1\x\w\9\7\0\q\y\m\4\7\o\j\0\f\j\b\w\t\t\3\f\3\0\z\l\w\h\5\6\0\1\e\p\s\u\q\q\8\j\t\3\a\i\n\u\8\2\m\0\0\o\9\x\i\x\x\2\m\g\7\s\x\m\9\y\f\3\i\k\i\9\q\1\f\1\4\e\a\9\4\m\p\y\5\x\l\j\x\4\y\k\v\0\i\h\k\k\n\f\r\s\b\3\7\r\e\r\x\7\8\8\t\k\v\m\k\w\4\s\e\1\c\s\w\g\r\t\i\0\a\x\m\u\4\t\6\3\7\7\j\z\p\k\n\x\f\z\v\p\d\b\l\i\0\f\s\r\1\l\3\n\s\f\9\n\h\5\i\1\l\7\6\n\u\2\f\o\f\v\s\p\s\7\z\d\q\g\e\v\c\0\5\p\x\u\e\v\m\n\q\8\a\4\0\f\9\4\o\f\e\f\2\s\i\2\6\t\e\h\p\f\9\9\a\w\i\2\u\y\e\1\8\v\h\8\w\i\c\3\u\0\x\p\e\z\c\r\c\7\f\q\r\e\e\q\s\i\w\v\z\1\l\a\b\h\l\z\r\2\3\n\a\g\0\q\f\4\q\3\r\1\f\e\c\4\t\0\p\9\j\m\5\6\b\1\n\v\x\s\3\b\a\q\9\z\z\h\5\g\v\g\j\h\6\2\a\0\t\i\e\r\j\6\6\8\3\e\f\2\3\h\j\s\h\x\a\n\e\x\r\j\x\x\d\u\i\g\y\c\8 ]] 00:08:19.338 06:32:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.338 06:32:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:19.338 [2024-07-12 06:32:59.085383] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:19.338 [2024-07-12 06:32:59.085550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70156 ] 00:08:19.338 [2024-07-12 06:32:59.232265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.596 [2024-07-12 06:32:59.269031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.596  Copying: 512/512 [B] (average 250 kBps) 00:08:19.596 00:08:19.596 06:32:59 -- dd/posix.sh@93 -- # [[ i7c17aze8vx0j1rr9kqlo9q9ifb57vcru9tpsgpl7e7xwzss9u1mexb2oqp4vqelqqqlt18unftf5jkz3nulkwqjx8wfpg7kljzhnffc8l7h8vov17cs4cyt9bs6v4kyzggpfy3clrhx381rt2eob9s9jrob2smnh1xw970qym47oj0fjbwtt3f30zlwh5601epsuqq8jt3ainu82m00o9xixx2mg7sxm9yf3iki9q1f14ea94mpy5xljx4ykv0ihkknfrsb37rerx788tkvmkw4se1cswgrti0axmu4t6377jzpknxfzvpdbli0fsr1l3nsf9nh5i1l76nu2fofvsps7zdqgevc05pxuevmnq8a40f94ofef2si26tehpf99awi2uye18vh8wic3u0xpezcrc7fqreeqsiwvz1labhlzr23nag0qf4q3r1fec4t0p9jm56b1nvxs3baq9zzh5gvgjh62a0tierj6683ef23hjshxanexrjxxduigyc8 == \i\7\c\1\7\a\z\e\8\v\x\0\j\1\r\r\9\k\q\l\o\9\q\9\i\f\b\5\7\v\c\r\u\9\t\p\s\g\p\l\7\e\7\x\w\z\s\s\9\u\1\m\e\x\b\2\o\q\p\4\v\q\e\l\q\q\q\l\t\1\8\u\n\f\t\f\5\j\k\z\3\n\u\l\k\w\q\j\x\8\w\f\p\g\7\k\l\j\z\h\n\f\f\c\8\l\7\h\8\v\o\v\1\7\c\s\4\c\y\t\9\b\s\6\v\4\k\y\z\g\g\p\f\y\3\c\l\r\h\x\3\8\1\r\t\2\e\o\b\9\s\9\j\r\o\b\2\s\m\n\h\1\x\w\9\7\0\q\y\m\4\7\o\j\0\f\j\b\w\t\t\3\f\3\0\z\l\w\h\5\6\0\1\e\p\s\u\q\q\8\j\t\3\a\i\n\u\8\2\m\0\0\o\9\x\i\x\x\2\m\g\7\s\x\m\9\y\f\3\i\k\i\9\q\1\f\1\4\e\a\9\4\m\p\y\5\x\l\j\x\4\y\k\v\0\i\h\k\k\n\f\r\s\b\3\7\r\e\r\x\7\8\8\t\k\v\m\k\w\4\s\e\1\c\s\w\g\r\t\i\0\a\x\m\u\4\t\6\3\7\7\j\z\p\k\n\x\f\z\v\p\d\b\l\i\0\f\s\r\1\l\3\n\s\f\9\n\h\5\i\1\l\7\6\n\u\2\f\o\f\v\s\p\s\7\z\d\q\g\e\v\c\0\5\p\x\u\e\v\m\n\q\8\a\4\0\f\9\4\o\f\e\f\2\s\i\2\6\t\e\h\p\f\9\9\a\w\i\2\u\y\e\1\8\v\h\8\w\i\c\3\u\0\x\p\e\z\c\r\c\7\f\q\r\e\e\q\s\i\w\v\z\1\l\a\b\h\l\z\r\2\3\n\a\g\0\q\f\4\q\3\r\1\f\e\c\4\t\0\p\9\j\m\5\6\b\1\n\v\x\s\3\b\a\q\9\z\z\h\5\g\v\g\j\h\6\2\a\0\t\i\e\r\j\6\6\8\3\e\f\2\3\h\j\s\h\x\a\n\e\x\r\j\x\x\d\u\i\g\y\c\8 ]] 00:08:19.596 06:32:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.596 06:32:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:19.855 [2024-07-12 06:32:59.523204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:19.855 [2024-07-12 06:32:59.523288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70158 ] 00:08:19.855 [2024-07-12 06:32:59.656272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.855 [2024-07-12 06:32:59.690450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.115  Copying: 512/512 [B] (average 500 kBps) 00:08:20.115 00:08:20.115 06:32:59 -- dd/posix.sh@93 -- # [[ i7c17aze8vx0j1rr9kqlo9q9ifb57vcru9tpsgpl7e7xwzss9u1mexb2oqp4vqelqqqlt18unftf5jkz3nulkwqjx8wfpg7kljzhnffc8l7h8vov17cs4cyt9bs6v4kyzggpfy3clrhx381rt2eob9s9jrob2smnh1xw970qym47oj0fjbwtt3f30zlwh5601epsuqq8jt3ainu82m00o9xixx2mg7sxm9yf3iki9q1f14ea94mpy5xljx4ykv0ihkknfrsb37rerx788tkvmkw4se1cswgrti0axmu4t6377jzpknxfzvpdbli0fsr1l3nsf9nh5i1l76nu2fofvsps7zdqgevc05pxuevmnq8a40f94ofef2si26tehpf99awi2uye18vh8wic3u0xpezcrc7fqreeqsiwvz1labhlzr23nag0qf4q3r1fec4t0p9jm56b1nvxs3baq9zzh5gvgjh62a0tierj6683ef23hjshxanexrjxxduigyc8 == \i\7\c\1\7\a\z\e\8\v\x\0\j\1\r\r\9\k\q\l\o\9\q\9\i\f\b\5\7\v\c\r\u\9\t\p\s\g\p\l\7\e\7\x\w\z\s\s\9\u\1\m\e\x\b\2\o\q\p\4\v\q\e\l\q\q\q\l\t\1\8\u\n\f\t\f\5\j\k\z\3\n\u\l\k\w\q\j\x\8\w\f\p\g\7\k\l\j\z\h\n\f\f\c\8\l\7\h\8\v\o\v\1\7\c\s\4\c\y\t\9\b\s\6\v\4\k\y\z\g\g\p\f\y\3\c\l\r\h\x\3\8\1\r\t\2\e\o\b\9\s\9\j\r\o\b\2\s\m\n\h\1\x\w\9\7\0\q\y\m\4\7\o\j\0\f\j\b\w\t\t\3\f\3\0\z\l\w\h\5\6\0\1\e\p\s\u\q\q\8\j\t\3\a\i\n\u\8\2\m\0\0\o\9\x\i\x\x\2\m\g\7\s\x\m\9\y\f\3\i\k\i\9\q\1\f\1\4\e\a\9\4\m\p\y\5\x\l\j\x\4\y\k\v\0\i\h\k\k\n\f\r\s\b\3\7\r\e\r\x\7\8\8\t\k\v\m\k\w\4\s\e\1\c\s\w\g\r\t\i\0\a\x\m\u\4\t\6\3\7\7\j\z\p\k\n\x\f\z\v\p\d\b\l\i\0\f\s\r\1\l\3\n\s\f\9\n\h\5\i\1\l\7\6\n\u\2\f\o\f\v\s\p\s\7\z\d\q\g\e\v\c\0\5\p\x\u\e\v\m\n\q\8\a\4\0\f\9\4\o\f\e\f\2\s\i\2\6\t\e\h\p\f\9\9\a\w\i\2\u\y\e\1\8\v\h\8\w\i\c\3\u\0\x\p\e\z\c\r\c\7\f\q\r\e\e\q\s\i\w\v\z\1\l\a\b\h\l\z\r\2\3\n\a\g\0\q\f\4\q\3\r\1\f\e\c\4\t\0\p\9\j\m\5\6\b\1\n\v\x\s\3\b\a\q\9\z\z\h\5\g\v\g\j\h\6\2\a\0\t\i\e\r\j\6\6\8\3\e\f\2\3\h\j\s\h\x\a\n\e\x\r\j\x\x\d\u\i\g\y\c\8 ]] 00:08:20.115 00:08:20.115 real 0m3.462s 00:08:20.115 user 0m1.696s 00:08:20.115 sys 0m0.781s 00:08:20.115 06:32:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.115 06:32:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.115 ************************************ 00:08:20.115 END TEST dd_flags_misc 00:08:20.115 ************************************ 00:08:20.115 06:32:59 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:20.115 06:32:59 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:20.115 * Second test run, disabling liburing, forcing AIO 00:08:20.115 06:32:59 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:20.115 06:32:59 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:20.115 06:32:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.115 06:32:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.115 06:32:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.115 ************************************ 00:08:20.115 START TEST dd_flag_append_forced_aio 00:08:20.115 ************************************ 00:08:20.115 06:32:59 -- common/autotest_common.sh@1104 -- # append 00:08:20.115 06:32:59 -- dd/posix.sh@16 -- # local dump0 00:08:20.115 06:32:59 -- dd/posix.sh@17 -- # local dump1 00:08:20.115 06:32:59 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:20.115 06:32:59 -- 
dd/common.sh@98 -- # xtrace_disable 00:08:20.115 06:32:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.115 06:32:59 -- dd/posix.sh@19 -- # dump0=zylss0sxy3zkfqcfdxk82iki9ih2mdn0 00:08:20.115 06:32:59 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:20.115 06:32:59 -- dd/common.sh@98 -- # xtrace_disable 00:08:20.115 06:32:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.115 06:32:59 -- dd/posix.sh@20 -- # dump1=driydmllkxgwhamjhphg2ffafntvlgae 00:08:20.115 06:32:59 -- dd/posix.sh@22 -- # printf %s zylss0sxy3zkfqcfdxk82iki9ih2mdn0 00:08:20.115 06:32:59 -- dd/posix.sh@23 -- # printf %s driydmllkxgwhamjhphg2ffafntvlgae 00:08:20.115 06:32:59 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:20.115 [2024-07-12 06:32:59.986561] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:20.115 [2024-07-12 06:32:59.986673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70190 ] 00:08:20.373 [2024-07-12 06:33:00.119740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.373 [2024-07-12 06:33:00.153841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.631  Copying: 32/32 [B] (average 31 kBps) 00:08:20.631 00:08:20.631 06:33:00 -- dd/posix.sh@27 -- # [[ driydmllkxgwhamjhphg2ffafntvlgaezylss0sxy3zkfqcfdxk82iki9ih2mdn0 == \d\r\i\y\d\m\l\l\k\x\g\w\h\a\m\j\h\p\h\g\2\f\f\a\f\n\t\v\l\g\a\e\z\y\l\s\s\0\s\x\y\3\z\k\f\q\c\f\d\x\k\8\2\i\k\i\9\i\h\2\m\d\n\0 ]] 00:08:20.631 00:08:20.631 real 0m0.417s 00:08:20.631 user 0m0.199s 00:08:20.631 sys 0m0.092s 00:08:20.631 06:33:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.631 06:33:00 -- common/autotest_common.sh@10 -- # set +x 00:08:20.631 ************************************ 00:08:20.631 END TEST dd_flag_append_forced_aio 00:08:20.631 ************************************ 00:08:20.631 06:33:00 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:20.631 06:33:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.631 06:33:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.631 06:33:00 -- common/autotest_common.sh@10 -- # set +x 00:08:20.631 ************************************ 00:08:20.631 START TEST dd_flag_directory_forced_aio 00:08:20.631 ************************************ 00:08:20.631 06:33:00 -- common/autotest_common.sh@1104 -- # directory 00:08:20.632 06:33:00 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.632 06:33:00 -- common/autotest_common.sh@640 -- # local es=0 00:08:20.632 06:33:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.632 06:33:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.632 06:33:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:20.632 06:33:00 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.632 06:33:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:20.632 06:33:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.632 06:33:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:20.632 06:33:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.632 06:33:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.632 06:33:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.632 [2024-07-12 06:33:00.441457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:20.632 [2024-07-12 06:33:00.441545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70211 ] 00:08:20.890 [2024-07-12 06:33:00.577622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.890 [2024-07-12 06:33:00.611721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.890 [2024-07-12 06:33:00.655332] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:20.890 [2024-07-12 06:33:00.655393] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:20.890 [2024-07-12 06:33:00.655407] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.890 [2024-07-12 06:33:00.714822] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:20.890 06:33:00 -- common/autotest_common.sh@643 -- # es=236 00:08:20.890 06:33:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:20.890 06:33:00 -- common/autotest_common.sh@652 -- # es=108 00:08:20.890 06:33:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:20.890 06:33:00 -- common/autotest_common.sh@660 -- # es=1 00:08:20.890 06:33:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:20.890 06:33:00 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:20.890 06:33:00 -- common/autotest_common.sh@640 -- # local es=0 00:08:20.890 06:33:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:20.890 06:33:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.890 06:33:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:20.890 06:33:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.890 06:33:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:20.890 06:33:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.890 06:33:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:20.890 06:33:00 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.890 06:33:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.890 06:33:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:21.147 [2024-07-12 06:33:00.845806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:21.147 [2024-07-12 06:33:00.845936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:08:21.147 [2024-07-12 06:33:00.991423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.147 [2024-07-12 06:33:01.036405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.404 [2024-07-12 06:33:01.086255] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:21.404 [2024-07-12 06:33:01.086318] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:21.404 [2024-07-12 06:33:01.086334] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.404 [2024-07-12 06:33:01.151509] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:21.404 06:33:01 -- common/autotest_common.sh@643 -- # es=236 00:08:21.404 06:33:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:21.404 06:33:01 -- common/autotest_common.sh@652 -- # es=108 00:08:21.404 06:33:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:21.404 06:33:01 -- common/autotest_common.sh@660 -- # es=1 00:08:21.404 06:33:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:21.404 00:08:21.404 real 0m0.828s 00:08:21.404 user 0m0.413s 00:08:21.404 sys 0m0.205s 00:08:21.404 06:33:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.404 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:21.404 ************************************ 00:08:21.404 END TEST dd_flag_directory_forced_aio 00:08:21.404 ************************************ 00:08:21.404 06:33:01 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:21.404 06:33:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:21.404 06:33:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.404 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:21.404 ************************************ 00:08:21.404 START TEST dd_flag_nofollow_forced_aio 00:08:21.404 ************************************ 00:08:21.404 06:33:01 -- common/autotest_common.sh@1104 -- # nofollow 00:08:21.404 06:33:01 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:21.404 06:33:01 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:21.404 06:33:01 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:21.404 06:33:01 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:21.404 06:33:01 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.404 06:33:01 -- common/autotest_common.sh@640 -- # local es=0 00:08:21.404 06:33:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.404 06:33:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.404 06:33:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.404 06:33:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.404 06:33:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.404 06:33:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.404 06:33:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.404 06:33:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.404 06:33:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.404 06:33:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.662 [2024-07-12 06:33:01.331677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:21.662 [2024-07-12 06:33:01.331806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70249 ] 00:08:21.662 [2024-07-12 06:33:01.478500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.662 [2024-07-12 06:33:01.519627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.662 [2024-07-12 06:33:01.570515] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:21.662 [2024-07-12 06:33:01.570581] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:21.662 [2024-07-12 06:33:01.570600] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.920 [2024-07-12 06:33:01.638601] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:21.920 06:33:01 -- common/autotest_common.sh@643 -- # es=216 00:08:21.920 06:33:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:21.920 06:33:01 -- common/autotest_common.sh@652 -- # es=88 00:08:21.920 06:33:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:21.920 06:33:01 -- common/autotest_common.sh@660 -- # es=1 00:08:21.920 06:33:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:21.920 06:33:01 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:21.920 06:33:01 -- common/autotest_common.sh@640 -- # local es=0 00:08:21.920 06:33:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:21.920 06:33:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.920 06:33:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.920 06:33:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.920 06:33:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.920 06:33:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.920 06:33:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.920 06:33:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.920 06:33:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.920 06:33:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:21.920 [2024-07-12 06:33:01.762880] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:21.920 [2024-07-12 06:33:01.763000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70258 ] 00:08:22.179 [2024-07-12 06:33:01.901930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.179 [2024-07-12 06:33:01.942072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.179 [2024-07-12 06:33:01.992412] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:22.179 [2024-07-12 06:33:01.992477] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:22.179 [2024-07-12 06:33:01.992497] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.179 [2024-07-12 06:33:02.058116] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:22.437 06:33:02 -- common/autotest_common.sh@643 -- # es=216 00:08:22.437 06:33:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:22.437 06:33:02 -- common/autotest_common.sh@652 -- # es=88 00:08:22.437 06:33:02 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:22.437 06:33:02 -- common/autotest_common.sh@660 -- # es=1 00:08:22.437 06:33:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:22.437 06:33:02 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:22.437 06:33:02 -- dd/common.sh@98 -- # xtrace_disable 00:08:22.437 06:33:02 -- common/autotest_common.sh@10 -- # set +x 00:08:22.437 06:33:02 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.437 [2024-07-12 06:33:02.197983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
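The two failure-path suites above (directory, then nofollow) share one shape: run spdk_dd expecting open() to fail with a specific error, and let the NOT wrapper from autotest_common.sh invert the result. The es=236/es=216, then es=108/es=88, then es=1 lines are that wrapper normalizing exit statuses above 128. Reduced to its essentials, with the helper and flags as they appear in the xtrace:

  # O_DIRECTORY on a regular file must fail ("Not a directory", ENOTDIR)...
  NOT spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0
  # ...and O_NOFOLLOW through a symlink must fail
  # ("Too many levels of symbolic links", ELOOP).
  ln -fs dd.dump0 dd.dump0.link
  NOT spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1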
00:08:22.437 [2024-07-12 06:33:02.198125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70266 ] 00:08:22.437 [2024-07-12 06:33:02.337270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.695 [2024-07-12 06:33:02.372780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.695  Copying: 512/512 [B] (average 500 kBps) 00:08:22.695 00:08:22.695 06:33:02 -- dd/posix.sh@49 -- # [[ txbeoxmyha90saqem2vyhjoiikj6t1edfdrcp6gthy4859jm3a8ho3ck8ekuwsnzmzstevy2831lygv3zpxu64rtefbjir4d0dy1dpdhg7spcx6iksc7g8v8cw8r2pggp6nkgc8kydpujucoabqa1nytkwstyocfcj40e13tjawo5n0g6unkmyj15bcd6kuch0ehuuomt7104ax5t9qhudjbr10eprqdjx965ru3ous1xq9r7nd393h8g2p0un9kqznvjam1i3uvio4tl5h3h61oj2b7aqgppatx5n8okchxocjy4a42tpctjzakqzoc1lh6yrr233nqx7xt52yddzrimy70rfwgcg2lx4ri4tvb7kt0kx1sjdj04qylhrrip24hscq1293l4ke9hq6nai7mbnun34ja6ux9dy42b7wq6ezztgvgsmb0vzl8a7n6br7r3rxb80jdm8zb92uw7azgsm7964dh7zwkd4nbvb7u2nojsyae06mtmnptjyhb == \t\x\b\e\o\x\m\y\h\a\9\0\s\a\q\e\m\2\v\y\h\j\o\i\i\k\j\6\t\1\e\d\f\d\r\c\p\6\g\t\h\y\4\8\5\9\j\m\3\a\8\h\o\3\c\k\8\e\k\u\w\s\n\z\m\z\s\t\e\v\y\2\8\3\1\l\y\g\v\3\z\p\x\u\6\4\r\t\e\f\b\j\i\r\4\d\0\d\y\1\d\p\d\h\g\7\s\p\c\x\6\i\k\s\c\7\g\8\v\8\c\w\8\r\2\p\g\g\p\6\n\k\g\c\8\k\y\d\p\u\j\u\c\o\a\b\q\a\1\n\y\t\k\w\s\t\y\o\c\f\c\j\4\0\e\1\3\t\j\a\w\o\5\n\0\g\6\u\n\k\m\y\j\1\5\b\c\d\6\k\u\c\h\0\e\h\u\u\o\m\t\7\1\0\4\a\x\5\t\9\q\h\u\d\j\b\r\1\0\e\p\r\q\d\j\x\9\6\5\r\u\3\o\u\s\1\x\q\9\r\7\n\d\3\9\3\h\8\g\2\p\0\u\n\9\k\q\z\n\v\j\a\m\1\i\3\u\v\i\o\4\t\l\5\h\3\h\6\1\o\j\2\b\7\a\q\g\p\p\a\t\x\5\n\8\o\k\c\h\x\o\c\j\y\4\a\4\2\t\p\c\t\j\z\a\k\q\z\o\c\1\l\h\6\y\r\r\2\3\3\n\q\x\7\x\t\5\2\y\d\d\z\r\i\m\y\7\0\r\f\w\g\c\g\2\l\x\4\r\i\4\t\v\b\7\k\t\0\k\x\1\s\j\d\j\0\4\q\y\l\h\r\r\i\p\2\4\h\s\c\q\1\2\9\3\l\4\k\e\9\h\q\6\n\a\i\7\m\b\n\u\n\3\4\j\a\6\u\x\9\d\y\4\2\b\7\w\q\6\e\z\z\t\g\v\g\s\m\b\0\v\z\l\8\a\7\n\6\b\r\7\r\3\r\x\b\8\0\j\d\m\8\z\b\9\2\u\w\7\a\z\g\s\m\7\9\6\4\d\h\7\z\w\k\d\4\n\b\v\b\7\u\2\n\o\j\s\y\a\e\0\6\m\t\m\n\p\t\j\y\h\b ]] 00:08:22.695 00:08:22.695 real 0m1.308s 00:08:22.695 user 0m0.672s 00:08:22.695 sys 0m0.306s 00:08:22.695 06:33:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.695 06:33:02 -- common/autotest_common.sh@10 -- # set +x 00:08:22.695 ************************************ 00:08:22.695 END TEST dd_flag_nofollow_forced_aio 00:08:22.695 ************************************ 00:08:22.695 06:33:02 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:22.695 06:33:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.695 06:33:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.695 06:33:02 -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 ************************************ 00:08:22.954 START TEST dd_flag_noatime_forced_aio 00:08:22.954 ************************************ 00:08:22.954 06:33:02 -- common/autotest_common.sh@1104 -- # noatime 00:08:22.954 06:33:02 -- dd/posix.sh@53 -- # local atime_if 00:08:22.954 06:33:02 -- dd/posix.sh@54 -- # local atime_of 00:08:22.954 06:33:02 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:22.954 06:33:02 -- dd/common.sh@98 -- # xtrace_disable 00:08:22.954 06:33:02 -- common/autotest_common.sh@10 -- # set +x 00:08:22.954 06:33:02 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.954 06:33:02 -- dd/posix.sh@60 -- # atime_if=1720765982 
00:08:22.954 06:33:02 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.954 06:33:02 -- dd/posix.sh@61 -- # atime_of=1720765982 00:08:22.954 06:33:02 -- dd/posix.sh@66 -- # sleep 1 00:08:23.887 06:33:03 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.887 [2024-07-12 06:33:03.686082] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:23.887 [2024-07-12 06:33:03.686180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70301 ] 00:08:24.145 [2024-07-12 06:33:03.818023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.145 [2024-07-12 06:33:03.856711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.145  Copying: 512/512 [B] (average 500 kBps) 00:08:24.145 00:08:24.403 06:33:04 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.403 06:33:04 -- dd/posix.sh@69 -- # (( atime_if == 1720765982 )) 00:08:24.403 06:33:04 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.403 06:33:04 -- dd/posix.sh@70 -- # (( atime_of == 1720765982 )) 00:08:24.403 06:33:04 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.403 [2024-07-12 06:33:04.133180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
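Everything from the "Second test run" marker onward is the same posix suite replayed with --aio added to the spdk_dd arguments (the DD_APP+=("--aio") step logged earlier), trading the default liburing engine for POSIX AIO as the banner says. The parameterization is just an argument array; the initial contents of DD_APP are inferred here, since only the += line appears in the log:

  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)  # assumed initial value
  DD_APP+=("--aio")        # second pass: disable liburing, force AIO

  # Tests invoke the binary through the array, so one suite exercises
  # both I/O backends without duplicating any test logic:
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1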
00:08:24.403 [2024-07-12 06:33:04.133315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:08:24.403 [2024-07-12 06:33:04.278043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.403 [2024-07-12 06:33:04.312852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.660  Copying: 512/512 [B] (average 500 kBps) 00:08:24.660 00:08:24.660 06:33:04 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.660 06:33:04 -- dd/posix.sh@73 -- # (( atime_if < 1720765984 )) 00:08:24.660 00:08:24.660 real 0m1.903s 00:08:24.660 user 0m0.444s 00:08:24.660 sys 0m0.207s 00:08:24.660 06:33:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.660 06:33:04 -- common/autotest_common.sh@10 -- # set +x 00:08:24.660 ************************************ 00:08:24.660 END TEST dd_flag_noatime_forced_aio 00:08:24.660 ************************************ 00:08:24.660 06:33:04 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:24.660 06:33:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.660 06:33:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.660 06:33:04 -- common/autotest_common.sh@10 -- # set +x 00:08:24.660 ************************************ 00:08:24.660 START TEST dd_flags_misc_forced_aio 00:08:24.660 ************************************ 00:08:24.660 06:33:04 -- common/autotest_common.sh@1104 -- # io 00:08:24.660 06:33:04 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:24.660 06:33:04 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:24.660 06:33:04 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:24.660 06:33:04 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:24.660 06:33:04 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:24.660 06:33:04 -- dd/common.sh@98 -- # xtrace_disable 00:08:24.660 06:33:04 -- common/autotest_common.sh@10 -- # set +x 00:08:24.660 06:33:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.660 06:33:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:24.918 [2024-07-12 06:33:04.616474] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:24.918 [2024-07-12 06:33:04.616565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70339 ] 00:08:24.918 [2024-07-12 06:33:04.751429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.918 [2024-07-12 06:33:04.786293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.177  Copying: 512/512 [B] (average 500 kBps) 00:08:25.177 00:08:25.177 06:33:04 -- dd/posix.sh@93 -- # [[ 6hzblhho3r2jsyut45sycmo2au1dwkh3lhcjavnkdgjydyis7i65selzayujdvqr3b2saw0k6bqriqotipfrefo7bnw3p5bo5ynitgbu6vnf83ltm0aju1pczjpq75immaa4im7u9u9jdycyght3v3lcqs5b8e67njybrpket2ljt5635nubkhfg4vwidy8cg2g24fznj96936hjy6hl5k1ggafl37opt8o9gz9v80sccw8ge20chbn7av3ba94uviwtoepdmadb8htvoxnvhcwurjccpayy4l65avgty0t0rhpm589lvq34cqqv4q5rwevgbu5mb5lo21pth1fy0c225qei776roy73khl3x7xkj3a8hte0jl0gb9cxlgt8pmbrr82md4zvut9vhd6k7g3yl8zwqo8ro2h3tdse5spo196sv5jmjqvla3p1z7q01vkp6q6wa30cjm8c7uojqrpycd7kyr5uh1x1auat9qf9gi1kdgfr9m5v6tdtqi95 == \6\h\z\b\l\h\h\o\3\r\2\j\s\y\u\t\4\5\s\y\c\m\o\2\a\u\1\d\w\k\h\3\l\h\c\j\a\v\n\k\d\g\j\y\d\y\i\s\7\i\6\5\s\e\l\z\a\y\u\j\d\v\q\r\3\b\2\s\a\w\0\k\6\b\q\r\i\q\o\t\i\p\f\r\e\f\o\7\b\n\w\3\p\5\b\o\5\y\n\i\t\g\b\u\6\v\n\f\8\3\l\t\m\0\a\j\u\1\p\c\z\j\p\q\7\5\i\m\m\a\a\4\i\m\7\u\9\u\9\j\d\y\c\y\g\h\t\3\v\3\l\c\q\s\5\b\8\e\6\7\n\j\y\b\r\p\k\e\t\2\l\j\t\5\6\3\5\n\u\b\k\h\f\g\4\v\w\i\d\y\8\c\g\2\g\2\4\f\z\n\j\9\6\9\3\6\h\j\y\6\h\l\5\k\1\g\g\a\f\l\3\7\o\p\t\8\o\9\g\z\9\v\8\0\s\c\c\w\8\g\e\2\0\c\h\b\n\7\a\v\3\b\a\9\4\u\v\i\w\t\o\e\p\d\m\a\d\b\8\h\t\v\o\x\n\v\h\c\w\u\r\j\c\c\p\a\y\y\4\l\6\5\a\v\g\t\y\0\t\0\r\h\p\m\5\8\9\l\v\q\3\4\c\q\q\v\4\q\5\r\w\e\v\g\b\u\5\m\b\5\l\o\2\1\p\t\h\1\f\y\0\c\2\2\5\q\e\i\7\7\6\r\o\y\7\3\k\h\l\3\x\7\x\k\j\3\a\8\h\t\e\0\j\l\0\g\b\9\c\x\l\g\t\8\p\m\b\r\r\8\2\m\d\4\z\v\u\t\9\v\h\d\6\k\7\g\3\y\l\8\z\w\q\o\8\r\o\2\h\3\t\d\s\e\5\s\p\o\1\9\6\s\v\5\j\m\j\q\v\l\a\3\p\1\z\7\q\0\1\v\k\p\6\q\6\w\a\3\0\c\j\m\8\c\7\u\o\j\q\r\p\y\c\d\7\k\y\r\5\u\h\1\x\1\a\u\a\t\9\q\f\9\g\i\1\k\d\g\f\r\9\m\5\v\6\t\d\t\q\i\9\5 ]] 00:08:25.177 06:33:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.177 06:33:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:25.177 [2024-07-12 06:33:05.015798] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:25.177 [2024-07-12 06:33:05.015896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70352 ] 00:08:25.435 [2024-07-12 06:33:05.149102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.435 [2024-07-12 06:33:05.183556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.695  Copying: 512/512 [B] (average 500 kBps) 00:08:25.695 00:08:25.695 06:33:05 -- dd/posix.sh@93 -- # [[ 6hzblhho3r2jsyut45sycmo2au1dwkh3lhcjavnkdgjydyis7i65selzayujdvqr3b2saw0k6bqriqotipfrefo7bnw3p5bo5ynitgbu6vnf83ltm0aju1pczjpq75immaa4im7u9u9jdycyght3v3lcqs5b8e67njybrpket2ljt5635nubkhfg4vwidy8cg2g24fznj96936hjy6hl5k1ggafl37opt8o9gz9v80sccw8ge20chbn7av3ba94uviwtoepdmadb8htvoxnvhcwurjccpayy4l65avgty0t0rhpm589lvq34cqqv4q5rwevgbu5mb5lo21pth1fy0c225qei776roy73khl3x7xkj3a8hte0jl0gb9cxlgt8pmbrr82md4zvut9vhd6k7g3yl8zwqo8ro2h3tdse5spo196sv5jmjqvla3p1z7q01vkp6q6wa30cjm8c7uojqrpycd7kyr5uh1x1auat9qf9gi1kdgfr9m5v6tdtqi95 == \6\h\z\b\l\h\h\o\3\r\2\j\s\y\u\t\4\5\s\y\c\m\o\2\a\u\1\d\w\k\h\3\l\h\c\j\a\v\n\k\d\g\j\y\d\y\i\s\7\i\6\5\s\e\l\z\a\y\u\j\d\v\q\r\3\b\2\s\a\w\0\k\6\b\q\r\i\q\o\t\i\p\f\r\e\f\o\7\b\n\w\3\p\5\b\o\5\y\n\i\t\g\b\u\6\v\n\f\8\3\l\t\m\0\a\j\u\1\p\c\z\j\p\q\7\5\i\m\m\a\a\4\i\m\7\u\9\u\9\j\d\y\c\y\g\h\t\3\v\3\l\c\q\s\5\b\8\e\6\7\n\j\y\b\r\p\k\e\t\2\l\j\t\5\6\3\5\n\u\b\k\h\f\g\4\v\w\i\d\y\8\c\g\2\g\2\4\f\z\n\j\9\6\9\3\6\h\j\y\6\h\l\5\k\1\g\g\a\f\l\3\7\o\p\t\8\o\9\g\z\9\v\8\0\s\c\c\w\8\g\e\2\0\c\h\b\n\7\a\v\3\b\a\9\4\u\v\i\w\t\o\e\p\d\m\a\d\b\8\h\t\v\o\x\n\v\h\c\w\u\r\j\c\c\p\a\y\y\4\l\6\5\a\v\g\t\y\0\t\0\r\h\p\m\5\8\9\l\v\q\3\4\c\q\q\v\4\q\5\r\w\e\v\g\b\u\5\m\b\5\l\o\2\1\p\t\h\1\f\y\0\c\2\2\5\q\e\i\7\7\6\r\o\y\7\3\k\h\l\3\x\7\x\k\j\3\a\8\h\t\e\0\j\l\0\g\b\9\c\x\l\g\t\8\p\m\b\r\r\8\2\m\d\4\z\v\u\t\9\v\h\d\6\k\7\g\3\y\l\8\z\w\q\o\8\r\o\2\h\3\t\d\s\e\5\s\p\o\1\9\6\s\v\5\j\m\j\q\v\l\a\3\p\1\z\7\q\0\1\v\k\p\6\q\6\w\a\3\0\c\j\m\8\c\7\u\o\j\q\r\p\y\c\d\7\k\y\r\5\u\h\1\x\1\a\u\a\t\9\q\f\9\g\i\1\k\d\g\f\r\9\m\5\v\6\t\d\t\q\i\9\5 ]] 00:08:25.695 06:33:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.695 06:33:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:25.695 [2024-07-12 06:33:05.437283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:25.695 [2024-07-12 06:33:05.437382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70354 ] 00:08:25.695 [2024-07-12 06:33:05.569676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.695 [2024-07-12 06:33:05.611601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.954  Copying: 512/512 [B] (average 166 kBps) 00:08:25.954 00:08:25.954 06:33:05 -- dd/posix.sh@93 -- # [[ 6hzblhho3r2jsyut45sycmo2au1dwkh3lhcjavnkdgjydyis7i65selzayujdvqr3b2saw0k6bqriqotipfrefo7bnw3p5bo5ynitgbu6vnf83ltm0aju1pczjpq75immaa4im7u9u9jdycyght3v3lcqs5b8e67njybrpket2ljt5635nubkhfg4vwidy8cg2g24fznj96936hjy6hl5k1ggafl37opt8o9gz9v80sccw8ge20chbn7av3ba94uviwtoepdmadb8htvoxnvhcwurjccpayy4l65avgty0t0rhpm589lvq34cqqv4q5rwevgbu5mb5lo21pth1fy0c225qei776roy73khl3x7xkj3a8hte0jl0gb9cxlgt8pmbrr82md4zvut9vhd6k7g3yl8zwqo8ro2h3tdse5spo196sv5jmjqvla3p1z7q01vkp6q6wa30cjm8c7uojqrpycd7kyr5uh1x1auat9qf9gi1kdgfr9m5v6tdtqi95 == \6\h\z\b\l\h\h\o\3\r\2\j\s\y\u\t\4\5\s\y\c\m\o\2\a\u\1\d\w\k\h\3\l\h\c\j\a\v\n\k\d\g\j\y\d\y\i\s\7\i\6\5\s\e\l\z\a\y\u\j\d\v\q\r\3\b\2\s\a\w\0\k\6\b\q\r\i\q\o\t\i\p\f\r\e\f\o\7\b\n\w\3\p\5\b\o\5\y\n\i\t\g\b\u\6\v\n\f\8\3\l\t\m\0\a\j\u\1\p\c\z\j\p\q\7\5\i\m\m\a\a\4\i\m\7\u\9\u\9\j\d\y\c\y\g\h\t\3\v\3\l\c\q\s\5\b\8\e\6\7\n\j\y\b\r\p\k\e\t\2\l\j\t\5\6\3\5\n\u\b\k\h\f\g\4\v\w\i\d\y\8\c\g\2\g\2\4\f\z\n\j\9\6\9\3\6\h\j\y\6\h\l\5\k\1\g\g\a\f\l\3\7\o\p\t\8\o\9\g\z\9\v\8\0\s\c\c\w\8\g\e\2\0\c\h\b\n\7\a\v\3\b\a\9\4\u\v\i\w\t\o\e\p\d\m\a\d\b\8\h\t\v\o\x\n\v\h\c\w\u\r\j\c\c\p\a\y\y\4\l\6\5\a\v\g\t\y\0\t\0\r\h\p\m\5\8\9\l\v\q\3\4\c\q\q\v\4\q\5\r\w\e\v\g\b\u\5\m\b\5\l\o\2\1\p\t\h\1\f\y\0\c\2\2\5\q\e\i\7\7\6\r\o\y\7\3\k\h\l\3\x\7\x\k\j\3\a\8\h\t\e\0\j\l\0\g\b\9\c\x\l\g\t\8\p\m\b\r\r\8\2\m\d\4\z\v\u\t\9\v\h\d\6\k\7\g\3\y\l\8\z\w\q\o\8\r\o\2\h\3\t\d\s\e\5\s\p\o\1\9\6\s\v\5\j\m\j\q\v\l\a\3\p\1\z\7\q\0\1\v\k\p\6\q\6\w\a\3\0\c\j\m\8\c\7\u\o\j\q\r\p\y\c\d\7\k\y\r\5\u\h\1\x\1\a\u\a\t\9\q\f\9\g\i\1\k\d\g\f\r\9\m\5\v\6\t\d\t\q\i\9\5 ]] 00:08:25.954 06:33:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.954 06:33:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:26.213 [2024-07-12 06:33:05.877759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
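The recurring "Copying: 512/512 [B] (average N kBps)" summaries are easy to sanity-check: every run here moves a 512-byte payload, so the reported rate is just payload over elapsed time. 512 B in about one millisecond is 512000 B/s, which is the 500 kBps most runs show; the occasional 166 or 250 kBps figures (as in the sync copy just above) are the same 512 B taking roughly two to three milliseconds:

  # 512 bytes copied in ~1 ms works out to ~500 KiB/s,
  # matching the "average 500 kBps" lines in this log.
  echo '512 / 0.001 / 1024' | bc    # prints 500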
00:08:26.213 [2024-07-12 06:33:05.877866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70357 ] 00:08:26.213 [2024-07-12 06:33:06.010053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.213 [2024-07-12 06:33:06.048587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.472  Copying: 512/512 [B] (average 500 kBps) 00:08:26.472 00:08:26.472 06:33:06 -- dd/posix.sh@93 -- # [[ 6hzblhho3r2jsyut45sycmo2au1dwkh3lhcjavnkdgjydyis7i65selzayujdvqr3b2saw0k6bqriqotipfrefo7bnw3p5bo5ynitgbu6vnf83ltm0aju1pczjpq75immaa4im7u9u9jdycyght3v3lcqs5b8e67njybrpket2ljt5635nubkhfg4vwidy8cg2g24fznj96936hjy6hl5k1ggafl37opt8o9gz9v80sccw8ge20chbn7av3ba94uviwtoepdmadb8htvoxnvhcwurjccpayy4l65avgty0t0rhpm589lvq34cqqv4q5rwevgbu5mb5lo21pth1fy0c225qei776roy73khl3x7xkj3a8hte0jl0gb9cxlgt8pmbrr82md4zvut9vhd6k7g3yl8zwqo8ro2h3tdse5spo196sv5jmjqvla3p1z7q01vkp6q6wa30cjm8c7uojqrpycd7kyr5uh1x1auat9qf9gi1kdgfr9m5v6tdtqi95 == \6\h\z\b\l\h\h\o\3\r\2\j\s\y\u\t\4\5\s\y\c\m\o\2\a\u\1\d\w\k\h\3\l\h\c\j\a\v\n\k\d\g\j\y\d\y\i\s\7\i\6\5\s\e\l\z\a\y\u\j\d\v\q\r\3\b\2\s\a\w\0\k\6\b\q\r\i\q\o\t\i\p\f\r\e\f\o\7\b\n\w\3\p\5\b\o\5\y\n\i\t\g\b\u\6\v\n\f\8\3\l\t\m\0\a\j\u\1\p\c\z\j\p\q\7\5\i\m\m\a\a\4\i\m\7\u\9\u\9\j\d\y\c\y\g\h\t\3\v\3\l\c\q\s\5\b\8\e\6\7\n\j\y\b\r\p\k\e\t\2\l\j\t\5\6\3\5\n\u\b\k\h\f\g\4\v\w\i\d\y\8\c\g\2\g\2\4\f\z\n\j\9\6\9\3\6\h\j\y\6\h\l\5\k\1\g\g\a\f\l\3\7\o\p\t\8\o\9\g\z\9\v\8\0\s\c\c\w\8\g\e\2\0\c\h\b\n\7\a\v\3\b\a\9\4\u\v\i\w\t\o\e\p\d\m\a\d\b\8\h\t\v\o\x\n\v\h\c\w\u\r\j\c\c\p\a\y\y\4\l\6\5\a\v\g\t\y\0\t\0\r\h\p\m\5\8\9\l\v\q\3\4\c\q\q\v\4\q\5\r\w\e\v\g\b\u\5\m\b\5\l\o\2\1\p\t\h\1\f\y\0\c\2\2\5\q\e\i\7\7\6\r\o\y\7\3\k\h\l\3\x\7\x\k\j\3\a\8\h\t\e\0\j\l\0\g\b\9\c\x\l\g\t\8\p\m\b\r\r\8\2\m\d\4\z\v\u\t\9\v\h\d\6\k\7\g\3\y\l\8\z\w\q\o\8\r\o\2\h\3\t\d\s\e\5\s\p\o\1\9\6\s\v\5\j\m\j\q\v\l\a\3\p\1\z\7\q\0\1\v\k\p\6\q\6\w\a\3\0\c\j\m\8\c\7\u\o\j\q\r\p\y\c\d\7\k\y\r\5\u\h\1\x\1\a\u\a\t\9\q\f\9\g\i\1\k\d\g\f\r\9\m\5\v\6\t\d\t\q\i\9\5 ]] 00:08:26.472 06:33:06 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:26.472 06:33:06 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:26.472 06:33:06 -- dd/common.sh@98 -- # xtrace_disable 00:08:26.472 06:33:06 -- common/autotest_common.sh@10 -- # set +x 00:08:26.472 06:33:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.472 06:33:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:26.472 [2024-07-12 06:33:06.320539] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:26.472 [2024-07-12 06:33:06.320645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70369 ] 00:08:26.730 [2024-07-12 06:33:06.457827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.730 [2024-07-12 06:33:06.493788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.990  Copying: 512/512 [B] (average 500 kBps) 00:08:26.990 00:08:26.990 06:33:06 -- dd/posix.sh@93 -- # [[ u6iej3svyw7eq8u5awmh8xct1fo7t62ejleiml30y4qgqc19ma4qbug9640187cu6shiwtx632uvgvfw9427btfl6rpfykv8vs3nfdn5su32d6blm8xem71x29uvyg3e3i81glzwwwtwzhu5ii72ctray6kwigvhbucmy30fob4j8gp5ceg8inh6eactjya0fzeslydv1gvwrg9bdbotvge3ek8mujm4fau6mqzhmg7uci9nfc2g0t56rnm9rpt4rf1vlab31a87st8t43ozr20bsq8nr3ar7sp7gj91uc4n7oichxdw956j7exx4ihmdxqjc0vwiaptih9woknd2t7jptv3mfa4r7assdii45k8o0l9sz48ep1vpyzi8396z896t3aeourizx6qrysjccpzqy32oqhgh9ilw46wjwnqb8782iquc1mtus6d7uo2gxfi57r05dpahd30oog70yxemnd1ddxpi3tv460labzwhgltxcekviqccramkxcs == \u\6\i\e\j\3\s\v\y\w\7\e\q\8\u\5\a\w\m\h\8\x\c\t\1\f\o\7\t\6\2\e\j\l\e\i\m\l\3\0\y\4\q\g\q\c\1\9\m\a\4\q\b\u\g\9\6\4\0\1\8\7\c\u\6\s\h\i\w\t\x\6\3\2\u\v\g\v\f\w\9\4\2\7\b\t\f\l\6\r\p\f\y\k\v\8\v\s\3\n\f\d\n\5\s\u\3\2\d\6\b\l\m\8\x\e\m\7\1\x\2\9\u\v\y\g\3\e\3\i\8\1\g\l\z\w\w\w\t\w\z\h\u\5\i\i\7\2\c\t\r\a\y\6\k\w\i\g\v\h\b\u\c\m\y\3\0\f\o\b\4\j\8\g\p\5\c\e\g\8\i\n\h\6\e\a\c\t\j\y\a\0\f\z\e\s\l\y\d\v\1\g\v\w\r\g\9\b\d\b\o\t\v\g\e\3\e\k\8\m\u\j\m\4\f\a\u\6\m\q\z\h\m\g\7\u\c\i\9\n\f\c\2\g\0\t\5\6\r\n\m\9\r\p\t\4\r\f\1\v\l\a\b\3\1\a\8\7\s\t\8\t\4\3\o\z\r\2\0\b\s\q\8\n\r\3\a\r\7\s\p\7\g\j\9\1\u\c\4\n\7\o\i\c\h\x\d\w\9\5\6\j\7\e\x\x\4\i\h\m\d\x\q\j\c\0\v\w\i\a\p\t\i\h\9\w\o\k\n\d\2\t\7\j\p\t\v\3\m\f\a\4\r\7\a\s\s\d\i\i\4\5\k\8\o\0\l\9\s\z\4\8\e\p\1\v\p\y\z\i\8\3\9\6\z\8\9\6\t\3\a\e\o\u\r\i\z\x\6\q\r\y\s\j\c\c\p\z\q\y\3\2\o\q\h\g\h\9\i\l\w\4\6\w\j\w\n\q\b\8\7\8\2\i\q\u\c\1\m\t\u\s\6\d\7\u\o\2\g\x\f\i\5\7\r\0\5\d\p\a\h\d\3\0\o\o\g\7\0\y\x\e\m\n\d\1\d\d\x\p\i\3\t\v\4\6\0\l\a\b\z\w\h\g\l\t\x\c\e\k\v\i\q\c\c\r\a\m\k\x\c\s ]] 00:08:26.990 06:33:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.990 06:33:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:26.990 [2024-07-12 06:33:06.731298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:26.990 [2024-07-12 06:33:06.731389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70371 ] 00:08:26.990 [2024-07-12 06:33:06.868594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.990 [2024-07-12 06:33:06.905762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.249  Copying: 512/512 [B] (average 500 kBps) 00:08:27.249 00:08:27.249 06:33:07 -- dd/posix.sh@93 -- # [[ u6iej3svyw7eq8u5awmh8xct1fo7t62ejleiml30y4qgqc19ma4qbug9640187cu6shiwtx632uvgvfw9427btfl6rpfykv8vs3nfdn5su32d6blm8xem71x29uvyg3e3i81glzwwwtwzhu5ii72ctray6kwigvhbucmy30fob4j8gp5ceg8inh6eactjya0fzeslydv1gvwrg9bdbotvge3ek8mujm4fau6mqzhmg7uci9nfc2g0t56rnm9rpt4rf1vlab31a87st8t43ozr20bsq8nr3ar7sp7gj91uc4n7oichxdw956j7exx4ihmdxqjc0vwiaptih9woknd2t7jptv3mfa4r7assdii45k8o0l9sz48ep1vpyzi8396z896t3aeourizx6qrysjccpzqy32oqhgh9ilw46wjwnqb8782iquc1mtus6d7uo2gxfi57r05dpahd30oog70yxemnd1ddxpi3tv460labzwhgltxcekviqccramkxcs == \u\6\i\e\j\3\s\v\y\w\7\e\q\8\u\5\a\w\m\h\8\x\c\t\1\f\o\7\t\6\2\e\j\l\e\i\m\l\3\0\y\4\q\g\q\c\1\9\m\a\4\q\b\u\g\9\6\4\0\1\8\7\c\u\6\s\h\i\w\t\x\6\3\2\u\v\g\v\f\w\9\4\2\7\b\t\f\l\6\r\p\f\y\k\v\8\v\s\3\n\f\d\n\5\s\u\3\2\d\6\b\l\m\8\x\e\m\7\1\x\2\9\u\v\y\g\3\e\3\i\8\1\g\l\z\w\w\w\t\w\z\h\u\5\i\i\7\2\c\t\r\a\y\6\k\w\i\g\v\h\b\u\c\m\y\3\0\f\o\b\4\j\8\g\p\5\c\e\g\8\i\n\h\6\e\a\c\t\j\y\a\0\f\z\e\s\l\y\d\v\1\g\v\w\r\g\9\b\d\b\o\t\v\g\e\3\e\k\8\m\u\j\m\4\f\a\u\6\m\q\z\h\m\g\7\u\c\i\9\n\f\c\2\g\0\t\5\6\r\n\m\9\r\p\t\4\r\f\1\v\l\a\b\3\1\a\8\7\s\t\8\t\4\3\o\z\r\2\0\b\s\q\8\n\r\3\a\r\7\s\p\7\g\j\9\1\u\c\4\n\7\o\i\c\h\x\d\w\9\5\6\j\7\e\x\x\4\i\h\m\d\x\q\j\c\0\v\w\i\a\p\t\i\h\9\w\o\k\n\d\2\t\7\j\p\t\v\3\m\f\a\4\r\7\a\s\s\d\i\i\4\5\k\8\o\0\l\9\s\z\4\8\e\p\1\v\p\y\z\i\8\3\9\6\z\8\9\6\t\3\a\e\o\u\r\i\z\x\6\q\r\y\s\j\c\c\p\z\q\y\3\2\o\q\h\g\h\9\i\l\w\4\6\w\j\w\n\q\b\8\7\8\2\i\q\u\c\1\m\t\u\s\6\d\7\u\o\2\g\x\f\i\5\7\r\0\5\d\p\a\h\d\3\0\o\o\g\7\0\y\x\e\m\n\d\1\d\d\x\p\i\3\t\v\4\6\0\l\a\b\z\w\h\g\l\t\x\c\e\k\v\i\q\c\c\r\a\m\k\x\c\s ]] 00:08:27.249 06:33:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.249 06:33:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:27.508 [2024-07-12 06:33:07.182796] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:27.508 [2024-07-12 06:33:07.182926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70384 ] 00:08:27.508 [2024-07-12 06:33:07.320926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.508 [2024-07-12 06:33:07.360684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.767  Copying: 512/512 [B] (average 500 kBps) 00:08:27.767 00:08:27.767 06:33:07 -- dd/posix.sh@93 -- # [[ u6iej3svyw7eq8u5awmh8xct1fo7t62ejleiml30y4qgqc19ma4qbug9640187cu6shiwtx632uvgvfw9427btfl6rpfykv8vs3nfdn5su32d6blm8xem71x29uvyg3e3i81glzwwwtwzhu5ii72ctray6kwigvhbucmy30fob4j8gp5ceg8inh6eactjya0fzeslydv1gvwrg9bdbotvge3ek8mujm4fau6mqzhmg7uci9nfc2g0t56rnm9rpt4rf1vlab31a87st8t43ozr20bsq8nr3ar7sp7gj91uc4n7oichxdw956j7exx4ihmdxqjc0vwiaptih9woknd2t7jptv3mfa4r7assdii45k8o0l9sz48ep1vpyzi8396z896t3aeourizx6qrysjccpzqy32oqhgh9ilw46wjwnqb8782iquc1mtus6d7uo2gxfi57r05dpahd30oog70yxemnd1ddxpi3tv460labzwhgltxcekviqccramkxcs == \u\6\i\e\j\3\s\v\y\w\7\e\q\8\u\5\a\w\m\h\8\x\c\t\1\f\o\7\t\6\2\e\j\l\e\i\m\l\3\0\y\4\q\g\q\c\1\9\m\a\4\q\b\u\g\9\6\4\0\1\8\7\c\u\6\s\h\i\w\t\x\6\3\2\u\v\g\v\f\w\9\4\2\7\b\t\f\l\6\r\p\f\y\k\v\8\v\s\3\n\f\d\n\5\s\u\3\2\d\6\b\l\m\8\x\e\m\7\1\x\2\9\u\v\y\g\3\e\3\i\8\1\g\l\z\w\w\w\t\w\z\h\u\5\i\i\7\2\c\t\r\a\y\6\k\w\i\g\v\h\b\u\c\m\y\3\0\f\o\b\4\j\8\g\p\5\c\e\g\8\i\n\h\6\e\a\c\t\j\y\a\0\f\z\e\s\l\y\d\v\1\g\v\w\r\g\9\b\d\b\o\t\v\g\e\3\e\k\8\m\u\j\m\4\f\a\u\6\m\q\z\h\m\g\7\u\c\i\9\n\f\c\2\g\0\t\5\6\r\n\m\9\r\p\t\4\r\f\1\v\l\a\b\3\1\a\8\7\s\t\8\t\4\3\o\z\r\2\0\b\s\q\8\n\r\3\a\r\7\s\p\7\g\j\9\1\u\c\4\n\7\o\i\c\h\x\d\w\9\5\6\j\7\e\x\x\4\i\h\m\d\x\q\j\c\0\v\w\i\a\p\t\i\h\9\w\o\k\n\d\2\t\7\j\p\t\v\3\m\f\a\4\r\7\a\s\s\d\i\i\4\5\k\8\o\0\l\9\s\z\4\8\e\p\1\v\p\y\z\i\8\3\9\6\z\8\9\6\t\3\a\e\o\u\r\i\z\x\6\q\r\y\s\j\c\c\p\z\q\y\3\2\o\q\h\g\h\9\i\l\w\4\6\w\j\w\n\q\b\8\7\8\2\i\q\u\c\1\m\t\u\s\6\d\7\u\o\2\g\x\f\i\5\7\r\0\5\d\p\a\h\d\3\0\o\o\g\7\0\y\x\e\m\n\d\1\d\d\x\p\i\3\t\v\4\6\0\l\a\b\z\w\h\g\l\t\x\c\e\k\v\i\q\c\c\r\a\m\k\x\c\s ]] 00:08:27.767 06:33:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.767 06:33:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:27.767 [2024-07-12 06:33:07.603767] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
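The spdk_dd invocations in this stretch come from the dd_flags_misc_forced_aio loop: the same 512-byte dump file is copied with --oflag set to direct, nonblock, sync and dsync in turn, and the long [[ ... == \6\h... ]] xtrace lines are bash comparing the gen_bytes-generated contents of dd.dump0 and dd.dump1 (the backslashes are xtrace quoting of the pattern side). A minimal sketch of that loop, reconstructed from the flags_rw xtrace above — the exact loop body in posix.sh is an assumption; the binary path and flag values are taken verbatim from the log:

    # Reconstructed sketch of the flags_rw loop seen in the xtrace.
    # Assumption: posix.sh iterates exactly like this; paths/flags are from the log.
    for flag_rw in direct nonblock sync dsync; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag="$flag_rw"
    done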
00:08:27.767 [2024-07-12 06:33:07.603888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70386 ] 00:08:28.026 [2024-07-12 06:33:07.749009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.026 [2024-07-12 06:33:07.784460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.285  Copying: 512/512 [B] (average 500 kBps) 00:08:28.285 00:08:28.285 06:33:08 -- dd/posix.sh@93 -- # [[ u6iej3svyw7eq8u5awmh8xct1fo7t62ejleiml30y4qgqc19ma4qbug9640187cu6shiwtx632uvgvfw9427btfl6rpfykv8vs3nfdn5su32d6blm8xem71x29uvyg3e3i81glzwwwtwzhu5ii72ctray6kwigvhbucmy30fob4j8gp5ceg8inh6eactjya0fzeslydv1gvwrg9bdbotvge3ek8mujm4fau6mqzhmg7uci9nfc2g0t56rnm9rpt4rf1vlab31a87st8t43ozr20bsq8nr3ar7sp7gj91uc4n7oichxdw956j7exx4ihmdxqjc0vwiaptih9woknd2t7jptv3mfa4r7assdii45k8o0l9sz48ep1vpyzi8396z896t3aeourizx6qrysjccpzqy32oqhgh9ilw46wjwnqb8782iquc1mtus6d7uo2gxfi57r05dpahd30oog70yxemnd1ddxpi3tv460labzwhgltxcekviqccramkxcs == \u\6\i\e\j\3\s\v\y\w\7\e\q\8\u\5\a\w\m\h\8\x\c\t\1\f\o\7\t\6\2\e\j\l\e\i\m\l\3\0\y\4\q\g\q\c\1\9\m\a\4\q\b\u\g\9\6\4\0\1\8\7\c\u\6\s\h\i\w\t\x\6\3\2\u\v\g\v\f\w\9\4\2\7\b\t\f\l\6\r\p\f\y\k\v\8\v\s\3\n\f\d\n\5\s\u\3\2\d\6\b\l\m\8\x\e\m\7\1\x\2\9\u\v\y\g\3\e\3\i\8\1\g\l\z\w\w\w\t\w\z\h\u\5\i\i\7\2\c\t\r\a\y\6\k\w\i\g\v\h\b\u\c\m\y\3\0\f\o\b\4\j\8\g\p\5\c\e\g\8\i\n\h\6\e\a\c\t\j\y\a\0\f\z\e\s\l\y\d\v\1\g\v\w\r\g\9\b\d\b\o\t\v\g\e\3\e\k\8\m\u\j\m\4\f\a\u\6\m\q\z\h\m\g\7\u\c\i\9\n\f\c\2\g\0\t\5\6\r\n\m\9\r\p\t\4\r\f\1\v\l\a\b\3\1\a\8\7\s\t\8\t\4\3\o\z\r\2\0\b\s\q\8\n\r\3\a\r\7\s\p\7\g\j\9\1\u\c\4\n\7\o\i\c\h\x\d\w\9\5\6\j\7\e\x\x\4\i\h\m\d\x\q\j\c\0\v\w\i\a\p\t\i\h\9\w\o\k\n\d\2\t\7\j\p\t\v\3\m\f\a\4\r\7\a\s\s\d\i\i\4\5\k\8\o\0\l\9\s\z\4\8\e\p\1\v\p\y\z\i\8\3\9\6\z\8\9\6\t\3\a\e\o\u\r\i\z\x\6\q\r\y\s\j\c\c\p\z\q\y\3\2\o\q\h\g\h\9\i\l\w\4\6\w\j\w\n\q\b\8\7\8\2\i\q\u\c\1\m\t\u\s\6\d\7\u\o\2\g\x\f\i\5\7\r\0\5\d\p\a\h\d\3\0\o\o\g\7\0\y\x\e\m\n\d\1\d\d\x\p\i\3\t\v\4\6\0\l\a\b\z\w\h\g\l\t\x\c\e\k\v\i\q\c\c\r\a\m\k\x\c\s ]] 00:08:28.285 00:08:28.285 real 0m3.437s 00:08:28.285 user 0m1.711s 00:08:28.285 sys 0m0.736s 00:08:28.285 06:33:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.285 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.285 ************************************ 00:08:28.285 END TEST dd_flags_misc_forced_aio 00:08:28.285 ************************************ 00:08:28.285 06:33:08 -- dd/posix.sh@1 -- # cleanup 00:08:28.285 06:33:08 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:28.285 06:33:08 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:28.285 ************************************ 00:08:28.285 END TEST spdk_dd_posix 00:08:28.285 ************************************ 00:08:28.285 00:08:28.285 real 0m16.425s 00:08:28.285 user 0m7.077s 00:08:28.285 sys 0m3.490s 00:08:28.285 06:33:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.285 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.285 06:33:08 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:28.285 06:33:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.285 06:33:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.285 06:33:08 -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.285 ************************************ 00:08:28.285 START TEST spdk_dd_malloc 00:08:28.285 ************************************ 00:08:28.285 06:33:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:28.285 * Looking for test storage... 00:08:28.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:28.285 06:33:08 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.285 06:33:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.285 06:33:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.285 06:33:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.285 06:33:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.285 06:33:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.285 06:33:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.285 06:33:08 -- paths/export.sh@5 -- # export PATH 00:08:28.285 06:33:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.285 06:33:08 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:28.285 06:33:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.285 06:33:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.285 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.285 ************************************ 00:08:28.285 START TEST dd_malloc_copy 00:08:28.285 
************************************ 00:08:28.285 06:33:08 -- common/autotest_common.sh@1104 -- # malloc_copy 00:08:28.285 06:33:08 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:28.285 06:33:08 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:28.286 06:33:08 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:28.286 06:33:08 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:28.286 06:33:08 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:28.286 06:33:08 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:28.286 06:33:08 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:28.286 06:33:08 -- dd/malloc.sh@28 -- # gen_conf 00:08:28.286 06:33:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:28.286 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.544 [2024-07-12 06:33:08.217872] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:28.544 [2024-07-12 06:33:08.217976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70459 ] 00:08:28.544 { 00:08:28.544 "subsystems": [ 00:08:28.544 { 00:08:28.544 "subsystem": "bdev", 00:08:28.544 "config": [ 00:08:28.544 { 00:08:28.544 "params": { 00:08:28.544 "block_size": 512, 00:08:28.544 "num_blocks": 1048576, 00:08:28.544 "name": "malloc0" 00:08:28.544 }, 00:08:28.544 "method": "bdev_malloc_create" 00:08:28.544 }, 00:08:28.544 { 00:08:28.544 "params": { 00:08:28.544 "block_size": 512, 00:08:28.544 "num_blocks": 1048576, 00:08:28.544 "name": "malloc1" 00:08:28.544 }, 00:08:28.544 "method": "bdev_malloc_create" 00:08:28.544 }, 00:08:28.544 { 00:08:28.544 "method": "bdev_wait_for_examine" 00:08:28.544 } 00:08:28.544 ] 00:08:28.544 } 00:08:28.544 ] 00:08:28.544 } 00:08:28.544 [2024-07-12 06:33:08.355247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.544 [2024-07-12 06:33:08.394551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.702  Copying: 200/512 [MB] (200 MBps) Copying: 404/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:08:31.702 00:08:31.702 06:33:11 -- dd/malloc.sh@33 -- # gen_conf 00:08:31.702 06:33:11 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:31.702 06:33:11 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.702 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:08:31.702 [2024-07-12 06:33:11.516680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
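The dd_malloc_copy run that just completed drives spdk_dd entirely from a JSON config passed on a file descriptor (--json /dev/fd/62): two malloc bdevs of 1048576 blocks x 512 B (512 MB) are created and the data is copied malloc0 -> malloc1, then back the other way at ~200 MBps. A standalone sketch of the same configuration, reconstructed from the pretty-printed JSON in the log — writing it to a temporary file instead of /dev/fd/62 is an assumption made for readability:

    # Same bdev config as the log's /dev/fd/62 payload, written to a file.
    cat > /tmp/malloc_copy.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json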
00:08:31.702 [2024-07-12 06:33:11.516766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70501 ] 00:08:31.702 { 00:08:31.702 "subsystems": [ 00:08:31.702 { 00:08:31.702 "subsystem": "bdev", 00:08:31.702 "config": [ 00:08:31.702 { 00:08:31.702 "params": { 00:08:31.702 "block_size": 512, 00:08:31.702 "num_blocks": 1048576, 00:08:31.702 "name": "malloc0" 00:08:31.702 }, 00:08:31.702 "method": "bdev_malloc_create" 00:08:31.702 }, 00:08:31.702 { 00:08:31.702 "params": { 00:08:31.702 "block_size": 512, 00:08:31.702 "num_blocks": 1048576, 00:08:31.702 "name": "malloc1" 00:08:31.702 }, 00:08:31.702 "method": "bdev_malloc_create" 00:08:31.702 }, 00:08:31.702 { 00:08:31.702 "method": "bdev_wait_for_examine" 00:08:31.702 } 00:08:31.702 ] 00:08:31.702 } 00:08:31.702 ] 00:08:31.702 } 00:08:31.961 [2024-07-12 06:33:11.656242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.961 [2024-07-12 06:33:11.688237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.095  Copying: 201/512 [MB] (201 MBps) Copying: 405/512 [MB] (204 MBps) Copying: 512/512 [MB] (average 200 MBps) 00:08:35.095 00:08:35.095 00:08:35.095 real 0m6.664s 00:08:35.095 user 0m5.963s 00:08:35.095 sys 0m0.522s 00:08:35.095 06:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.095 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:08:35.095 ************************************ 00:08:35.095 END TEST dd_malloc_copy 00:08:35.095 ************************************ 00:08:35.095 00:08:35.095 real 0m6.794s 00:08:35.095 user 0m6.016s 00:08:35.095 sys 0m0.601s 00:08:35.095 06:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.095 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:08:35.095 ************************************ 00:08:35.095 END TEST spdk_dd_malloc 00:08:35.095 ************************************ 00:08:35.095 06:33:14 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:35.095 06:33:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:35.095 06:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.095 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:08:35.095 ************************************ 00:08:35.095 START TEST spdk_dd_bdev_to_bdev 00:08:35.095 ************************************ 00:08:35.095 06:33:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:35.095 * Looking for test storage... 
00:08:35.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:35.095 06:33:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.095 06:33:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.095 06:33:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.095 06:33:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.095 06:33:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.095 06:33:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.095 06:33:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.095 06:33:15 -- paths/export.sh@5 -- # export PATH 00:08:35.096 06:33:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:35.096 06:33:15 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:35.096 06:33:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:35.355 06:33:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.355 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:08:35.355 ************************************ 00:08:35.355 START TEST dd_inflate_file 00:08:35.355 ************************************ 00:08:35.355 06:33:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:35.355 [2024-07-12 06:33:15.088805] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:35.355 [2024-07-12 06:33:15.088982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70599 ] 00:08:35.355 [2024-07-12 06:33:15.239834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.614 [2024-07-12 06:33:15.277610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.614  Copying: 64/64 [MB] (average 2000 MBps) 00:08:35.614 00:08:35.614 00:08:35.614 real 0m0.491s 00:08:35.614 user 0m0.223s 00:08:35.614 sys 0m0.137s 00:08:35.614 06:33:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.614 ************************************ 00:08:35.614 END TEST dd_inflate_file 00:08:35.614 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:08:35.614 ************************************ 00:08:35.873 06:33:15 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:35.873 06:33:15 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:35.873 06:33:15 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:35.873 06:33:15 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:35.873 06:33:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:35.873 06:33:15 -- dd/common.sh@31 -- # xtrace_disable 00:08:35.873 06:33:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.873 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:08:35.873 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:08:35.873 ************************************ 00:08:35.873 START TEST dd_copy_to_out_bdev 
00:08:35.873 ************************************ 00:08:35.873 06:33:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:35.873 [2024-07-12 06:33:15.616624] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:35.873 [2024-07-12 06:33:15.616743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70636 ] 00:08:35.873 { 00:08:35.873 "subsystems": [ 00:08:35.873 { 00:08:35.873 "subsystem": "bdev", 00:08:35.873 "config": [ 00:08:35.873 { 00:08:35.873 "params": { 00:08:35.873 "trtype": "pcie", 00:08:35.873 "traddr": "0000:00:06.0", 00:08:35.873 "name": "Nvme0" 00:08:35.873 }, 00:08:35.873 "method": "bdev_nvme_attach_controller" 00:08:35.873 }, 00:08:35.873 { 00:08:35.873 "params": { 00:08:35.873 "trtype": "pcie", 00:08:35.873 "traddr": "0000:00:07.0", 00:08:35.873 "name": "Nvme1" 00:08:35.873 }, 00:08:35.873 "method": "bdev_nvme_attach_controller" 00:08:35.873 }, 00:08:35.873 { 00:08:35.873 "method": "bdev_wait_for_examine" 00:08:35.873 } 00:08:35.873 ] 00:08:35.873 } 00:08:35.873 ] 00:08:35.873 } 00:08:35.873 [2024-07-12 06:33:15.756696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.873 [2024-07-12 06:33:15.791274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.509  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:08:37.509 00:08:37.509 00:08:37.509 real 0m1.705s 00:08:37.509 user 0m1.461s 00:08:37.509 sys 0m0.176s 00:08:37.509 06:33:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.509 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 ************************************ 00:08:37.509 END TEST dd_copy_to_out_bdev 00:08:37.509 ************************************ 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:37.509 06:33:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.509 06:33:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.509 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 ************************************ 00:08:37.509 START TEST dd_offset_magic 00:08:37.509 ************************************ 00:08:37.509 06:33:17 -- common/autotest_common.sh@1104 -- # offset_magic 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:37.509 06:33:17 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:37.509 06:33:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:37.509 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 [2024-07-12 06:33:17.366031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:37.509 [2024-07-12 06:33:17.366117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70675 ] 00:08:37.509 { 00:08:37.509 "subsystems": [ 00:08:37.509 { 00:08:37.509 "subsystem": "bdev", 00:08:37.509 "config": [ 00:08:37.509 { 00:08:37.509 "params": { 00:08:37.509 "trtype": "pcie", 00:08:37.509 "traddr": "0000:00:06.0", 00:08:37.509 "name": "Nvme0" 00:08:37.509 }, 00:08:37.509 "method": "bdev_nvme_attach_controller" 00:08:37.509 }, 00:08:37.509 { 00:08:37.509 "params": { 00:08:37.509 "trtype": "pcie", 00:08:37.509 "traddr": "0000:00:07.0", 00:08:37.509 "name": "Nvme1" 00:08:37.509 }, 00:08:37.509 "method": "bdev_nvme_attach_controller" 00:08:37.509 }, 00:08:37.509 { 00:08:37.509 "method": "bdev_wait_for_examine" 00:08:37.509 } 00:08:37.509 ] 00:08:37.510 } 00:08:37.510 ] 00:08:37.510 } 00:08:37.767 [2024-07-12 06:33:17.501773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.767 [2024-07-12 06:33:17.538423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.283  Copying: 65/65 [MB] (average 915 MBps) 00:08:38.283 00:08:38.283 06:33:17 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:38.283 06:33:17 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:38.283 06:33:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.283 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:38.283 [2024-07-12 06:33:18.013770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:38.283 [2024-07-12 06:33:18.013864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70689 ] 00:08:38.283 { 00:08:38.283 "subsystems": [ 00:08:38.283 { 00:08:38.283 "subsystem": "bdev", 00:08:38.283 "config": [ 00:08:38.283 { 00:08:38.283 "params": { 00:08:38.283 "trtype": "pcie", 00:08:38.283 "traddr": "0000:00:06.0", 00:08:38.283 "name": "Nvme0" 00:08:38.283 }, 00:08:38.283 "method": "bdev_nvme_attach_controller" 00:08:38.283 }, 00:08:38.283 { 00:08:38.283 "params": { 00:08:38.283 "trtype": "pcie", 00:08:38.283 "traddr": "0000:00:07.0", 00:08:38.283 "name": "Nvme1" 00:08:38.283 }, 00:08:38.283 "method": "bdev_nvme_attach_controller" 00:08:38.283 }, 00:08:38.283 { 00:08:38.283 "method": "bdev_wait_for_examine" 00:08:38.283 } 00:08:38.283 ] 00:08:38.283 } 00:08:38.283 ] 00:08:38.283 } 00:08:38.283 [2024-07-12 06:33:18.153874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.283 [2024-07-12 06:33:18.189568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.863  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:38.864 00:08:38.864 06:33:18 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:38.864 06:33:18 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:38.864 06:33:18 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:38.864 06:33:18 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:38.864 06:33:18 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:38.864 06:33:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.864 06:33:18 -- common/autotest_common.sh@10 -- # set +x 00:08:38.864 [2024-07-12 06:33:18.589026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:38.864 [2024-07-12 06:33:18.589110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70704 ] 00:08:38.864 { 00:08:38.864 "subsystems": [ 00:08:38.864 { 00:08:38.864 "subsystem": "bdev", 00:08:38.864 "config": [ 00:08:38.864 { 00:08:38.864 "params": { 00:08:38.864 "trtype": "pcie", 00:08:38.864 "traddr": "0000:00:06.0", 00:08:38.864 "name": "Nvme0" 00:08:38.864 }, 00:08:38.864 "method": "bdev_nvme_attach_controller" 00:08:38.864 }, 00:08:38.864 { 00:08:38.864 "params": { 00:08:38.864 "trtype": "pcie", 00:08:38.864 "traddr": "0000:00:07.0", 00:08:38.864 "name": "Nvme1" 00:08:38.864 }, 00:08:38.864 "method": "bdev_nvme_attach_controller" 00:08:38.864 }, 00:08:38.864 { 00:08:38.864 "method": "bdev_wait_for_examine" 00:08:38.864 } 00:08:38.864 ] 00:08:38.864 } 00:08:38.864 ] 00:08:38.864 } 00:08:38.864 [2024-07-12 06:33:18.728433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.864 [2024-07-12 06:33:18.768225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.378  Copying: 65/65 [MB] (average 1065 MBps) 00:08:39.378 00:08:39.378 06:33:19 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:39.378 06:33:19 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:39.378 06:33:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.378 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:39.378 [2024-07-12 06:33:19.241790] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:39.378 [2024-07-12 06:33:19.241919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70723 ] 00:08:39.378 { 00:08:39.378 "subsystems": [ 00:08:39.378 { 00:08:39.378 "subsystem": "bdev", 00:08:39.378 "config": [ 00:08:39.378 { 00:08:39.378 "params": { 00:08:39.378 "trtype": "pcie", 00:08:39.378 "traddr": "0000:00:06.0", 00:08:39.378 "name": "Nvme0" 00:08:39.378 }, 00:08:39.378 "method": "bdev_nvme_attach_controller" 00:08:39.378 }, 00:08:39.378 { 00:08:39.378 "params": { 00:08:39.378 "trtype": "pcie", 00:08:39.378 "traddr": "0000:00:07.0", 00:08:39.378 "name": "Nvme1" 00:08:39.378 }, 00:08:39.378 "method": "bdev_nvme_attach_controller" 00:08:39.378 }, 00:08:39.378 { 00:08:39.378 "method": "bdev_wait_for_examine" 00:08:39.378 } 00:08:39.378 ] 00:08:39.378 } 00:08:39.378 ] 00:08:39.378 } 00:08:39.636 [2024-07-12 06:33:19.380084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.636 [2024-07-12 06:33:19.417197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.894  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:39.894 00:08:39.894 06:33:19 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:39.894 06:33:19 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:39.894 00:08:39.894 real 0m2.434s 00:08:39.894 user 0m1.750s 00:08:39.894 sys 0m0.496s 00:08:39.894 06:33:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.894 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:39.894 ************************************ 00:08:39.894 END TEST dd_offset_magic 00:08:39.894 ************************************ 00:08:39.894 06:33:19 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:39.894 06:33:19 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:39.894 06:33:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:39.894 06:33:19 -- dd/common.sh@11 -- # local nvme_ref= 00:08:39.894 06:33:19 -- dd/common.sh@12 -- # local size=4194330 00:08:39.894 06:33:19 -- dd/common.sh@14 -- # local bs=1048576 00:08:39.894 06:33:19 -- dd/common.sh@15 -- # local count=5 00:08:39.894 06:33:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:39.894 06:33:19 -- dd/common.sh@18 -- # gen_conf 00:08:39.894 06:33:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.894 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:40.151 [2024-07-12 06:33:19.851701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
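dd_offset_magic, which just ended, checks seek/skip handling between the two namespaces: for each offset in (16 64), 65 MB is copied Nvme0n1 -> Nvme1n1 at that --seek with a 1 MB block size, then 1 MB is read back with the matching --skip so that read -rn26 can confirm the string 'This Is Our Magic, find it' survived the round trip. The command pair for the first offset, taken verbatim from the xtrace (the JSON config plumbing via /dev/fd/62 is the same NVMe-pair payload shown earlier):

    # Copy 65 MiB to offset 16 MiB, then read 1 MiB back from the same offset.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --count=1 --skip=16 --bs=1048576 --json /dev/fd/62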
00:08:40.151 [2024-07-12 06:33:19.851797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70753 ] 00:08:40.151 { 00:08:40.151 "subsystems": [ 00:08:40.151 { 00:08:40.151 "subsystem": "bdev", 00:08:40.151 "config": [ 00:08:40.151 { 00:08:40.151 "params": { 00:08:40.151 "trtype": "pcie", 00:08:40.151 "traddr": "0000:00:06.0", 00:08:40.151 "name": "Nvme0" 00:08:40.151 }, 00:08:40.151 "method": "bdev_nvme_attach_controller" 00:08:40.151 }, 00:08:40.151 { 00:08:40.151 "params": { 00:08:40.151 "trtype": "pcie", 00:08:40.151 "traddr": "0000:00:07.0", 00:08:40.151 "name": "Nvme1" 00:08:40.151 }, 00:08:40.151 "method": "bdev_nvme_attach_controller" 00:08:40.151 }, 00:08:40.151 { 00:08:40.151 "method": "bdev_wait_for_examine" 00:08:40.151 } 00:08:40.151 ] 00:08:40.151 } 00:08:40.151 ] 00:08:40.151 } 00:08:40.151 [2024-07-12 06:33:19.993908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.151 [2024-07-12 06:33:20.032889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.666  Copying: 5120/5120 [kB] (average 1666 MBps) 00:08:40.666 00:08:40.666 06:33:20 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:40.666 06:33:20 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:40.666 06:33:20 -- dd/common.sh@11 -- # local nvme_ref= 00:08:40.666 06:33:20 -- dd/common.sh@12 -- # local size=4194330 00:08:40.666 06:33:20 -- dd/common.sh@14 -- # local bs=1048576 00:08:40.666 06:33:20 -- dd/common.sh@15 -- # local count=5 00:08:40.666 06:33:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:40.667 06:33:20 -- dd/common.sh@18 -- # gen_conf 00:08:40.667 06:33:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:40.667 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:40.667 [2024-07-12 06:33:20.418861] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
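The cleanup path calls clear_nvme for each namespace: spdk_dd reads /dev/zero at a 1 MB block size and writes five blocks (count=5, covering the requested size of 4194330 bytes) over the start of the bdev, scrubbing the magic data the tests left behind. The equivalent one-liner, from the xtrace:

    # Zero the first 5 MiB of each namespace used by the tests
    # (repeated with --ob=Nvme1n1 for the second namespace).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
      --ob=Nvme0n1 --count=5 --json /dev/fd/62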
00:08:40.667 [2024-07-12 06:33:20.418970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70768 ] 00:08:40.667 { 00:08:40.667 "subsystems": [ 00:08:40.667 { 00:08:40.667 "subsystem": "bdev", 00:08:40.667 "config": [ 00:08:40.667 { 00:08:40.667 "params": { 00:08:40.667 "trtype": "pcie", 00:08:40.667 "traddr": "0000:00:06.0", 00:08:40.667 "name": "Nvme0" 00:08:40.667 }, 00:08:40.667 "method": "bdev_nvme_attach_controller" 00:08:40.667 }, 00:08:40.667 { 00:08:40.667 "params": { 00:08:40.667 "trtype": "pcie", 00:08:40.667 "traddr": "0000:00:07.0", 00:08:40.667 "name": "Nvme1" 00:08:40.667 }, 00:08:40.667 "method": "bdev_nvme_attach_controller" 00:08:40.667 }, 00:08:40.667 { 00:08:40.667 "method": "bdev_wait_for_examine" 00:08:40.667 } 00:08:40.667 ] 00:08:40.667 } 00:08:40.667 ] 00:08:40.667 } 00:08:40.667 [2024-07-12 06:33:20.561004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.926 [2024-07-12 06:33:20.597552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.184  Copying: 5120/5120 [kB] (average 833 MBps) 00:08:41.184 00:08:41.184 06:33:20 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:41.184 00:08:41.184 real 0m6.017s 00:08:41.184 user 0m4.310s 00:08:41.184 sys 0m1.203s 00:08:41.184 06:33:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.184 ************************************ 00:08:41.184 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.184 END TEST spdk_dd_bdev_to_bdev 00:08:41.184 ************************************ 00:08:41.184 06:33:20 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:41.184 06:33:20 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:41.184 06:33:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.184 06:33:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.184 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.184 ************************************ 00:08:41.184 START TEST spdk_dd_uring 00:08:41.184 ************************************ 00:08:41.184 06:33:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:41.184 * Looking for test storage... 
00:08:41.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:41.184 06:33:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.184 06:33:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.184 06:33:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.184 06:33:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.184 06:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.184 06:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.184 06:33:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.184 06:33:21 -- paths/export.sh@5 -- # export PATH 00:08:41.184 06:33:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.184 06:33:21 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:41.184 06:33:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.184 06:33:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.184 06:33:21 -- common/autotest_common.sh@10 -- # set +x 00:08:41.184 ************************************ 00:08:41.184 START TEST dd_uring_copy 00:08:41.184 ************************************ 00:08:41.184 06:33:21 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:08:41.184 06:33:21 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:41.184 06:33:21 -- dd/uring.sh@16 -- # local magic 00:08:41.184 06:33:21 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:41.184 06:33:21 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:41.184 06:33:21 -- dd/uring.sh@19 -- # local verify_magic 00:08:41.184 06:33:21 -- dd/uring.sh@21 -- # init_zram 00:08:41.184 06:33:21 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:41.184 06:33:21 -- dd/common.sh@164 -- # return 00:08:41.184 06:33:21 -- dd/uring.sh@22 -- # create_zram_dev 00:08:41.184 06:33:21 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:41.184 06:33:21 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:41.184 06:33:21 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:41.184 06:33:21 -- dd/common.sh@181 -- # local id=1 00:08:41.184 06:33:21 -- dd/common.sh@182 -- # local size=512M 00:08:41.184 06:33:21 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:41.184 06:33:21 -- dd/common.sh@186 -- # echo 512M 00:08:41.184 06:33:21 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:41.184 06:33:21 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:41.184 06:33:21 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:41.184 06:33:21 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:41.184 06:33:21 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:41.184 06:33:21 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:41.441 06:33:21 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:41.441 06:33:21 -- dd/common.sh@98 -- # xtrace_disable 00:08:41.441 06:33:21 -- common/autotest_common.sh@10 -- # set +x 00:08:41.441 06:33:21 -- dd/uring.sh@41 -- # magic=pq8kjlna68ob2i50cx8knrd6vupgfs4uhveo2fgialk5ste2p6zfa7cu2u0uq9en8278npnna31s9xjybcfk0kvm94da0wodwllvzhysbbbvt6pl233s5pwuj1woljnr3ma4phywfzhuru9nubc04xrcrdzyk4rlunp20rq63k57p79v9n7nfk3210uiphjmf1jkmui70c3dzc7zh2n2q2ae5w30iwtt3w6au1ykwhs0q3byc1tj43enoixwvjey3bjl3ybcv7f87u4v0d2zt98xwllknhvrwk7w56qq8pn5wy2as2kitsrcbo8ux88sjrfrkuy923wzmqr1hi9nxbum17vfuryxlqawxjc6q02cpdpn4uezsg2bfb7txm6cpjx66tcfpewfrrvhg3sv4519d07ux9x3muhcchglooih8c80c8b4h4mbdujzxwozpgvfeo0hpk0fok9uxvio86uumzlvu9z606lmkjg3hbpyelal21lrrjtbexp82cq6hnv4fu38adsz8okt8biikveh8gucb1ye5tauyavbvw88qcb22quu99ewpohkpi6itfhq2cz0vsfbopqz246pws6z6mx351bvyfl3sl05l4sdu20ibb5tzg5djhfxp69uttx5zmu3le54nvpsrde1avb8001xxapk930jyr8f07q7u706fhyjpnxipva5p89xho0o1lbz8hyulrsm4ggo0aqsjcbn4djboh0x4tmjporbkg18h7oc8jtivv8hvwv37ud5k5jj37kt9gzurr60k0wwxhe64yvl03nn3f328zdvct2o3watfa1y0e332qr91qfh5mzek2k8kjy9erwe5mstmg0exd8slg8tv4thlftft3md8c2xh669jhsdnlnefrvd18lat3lx66illn6rzxs08suwbgeaep3x9qazg8y3ibsiokfbjso2ozcell4qa7yewefxyy6x08eet1s5ydwayila722vlsa5jumhscf21pkon9o5uvz7t8942f8c 00:08:41.442 06:33:21 -- dd/uring.sh@42 -- # echo 
pq8kjlna68ob2i50cx8knrd6vupgfs4uhveo2fgialk5ste2p6zfa7cu2u0uq9en8278npnna31s9xjybcfk0kvm94da0wodwllvzhysbbbvt6pl233s5pwuj1woljnr3ma4phywfzhuru9nubc04xrcrdzyk4rlunp20rq63k57p79v9n7nfk3210uiphjmf1jkmui70c3dzc7zh2n2q2ae5w30iwtt3w6au1ykwhs0q3byc1tj43enoixwvjey3bjl3ybcv7f87u4v0d2zt98xwllknhvrwk7w56qq8pn5wy2as2kitsrcbo8ux88sjrfrkuy923wzmqr1hi9nxbum17vfuryxlqawxjc6q02cpdpn4uezsg2bfb7txm6cpjx66tcfpewfrrvhg3sv4519d07ux9x3muhcchglooih8c80c8b4h4mbdujzxwozpgvfeo0hpk0fok9uxvio86uumzlvu9z606lmkjg3hbpyelal21lrrjtbexp82cq6hnv4fu38adsz8okt8biikveh8gucb1ye5tauyavbvw88qcb22quu99ewpohkpi6itfhq2cz0vsfbopqz246pws6z6mx351bvyfl3sl05l4sdu20ibb5tzg5djhfxp69uttx5zmu3le54nvpsrde1avb8001xxapk930jyr8f07q7u706fhyjpnxipva5p89xho0o1lbz8hyulrsm4ggo0aqsjcbn4djboh0x4tmjporbkg18h7oc8jtivv8hvwv37ud5k5jj37kt9gzurr60k0wwxhe64yvl03nn3f328zdvct2o3watfa1y0e332qr91qfh5mzek2k8kjy9erwe5mstmg0exd8slg8tv4thlftft3md8c2xh669jhsdnlnefrvd18lat3lx66illn6rzxs08suwbgeaep3x9qazg8y3ibsiokfbjso2ozcell4qa7yewefxyy6x08eet1s5ydwayila722vlsa5jumhscf21pkon9o5uvz7t8942f8c 00:08:41.442 06:33:21 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:41.442 [2024-07-12 06:33:21.176382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:41.442 [2024-07-12 06:33:21.176501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70830 ] 00:08:41.442 [2024-07-12 06:33:21.316248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.699 [2024-07-12 06:33:21.360685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.216  Copying: 511/511 [MB] (average 1528 MBps) 00:08:42.216 00:08:42.216 06:33:22 -- dd/uring.sh@54 -- # gen_conf 00:08:42.216 06:33:22 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:42.216 06:33:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:42.217 06:33:22 -- common/autotest_common.sh@10 -- # set +x 00:08:42.217 [2024-07-12 06:33:22.125622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:42.217 [2024-07-12 06:33:22.125762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70844 ] 00:08:42.476 { 00:08:42.476 "subsystems": [ 00:08:42.476 { 00:08:42.476 "subsystem": "bdev", 00:08:42.476 "config": [ 00:08:42.476 { 00:08:42.476 "params": { 00:08:42.476 "block_size": 512, 00:08:42.476 "num_blocks": 1048576, 00:08:42.476 "name": "malloc0" 00:08:42.476 }, 00:08:42.476 "method": "bdev_malloc_create" 00:08:42.476 }, 00:08:42.476 { 00:08:42.476 "params": { 00:08:42.476 "filename": "/dev/zram1", 00:08:42.476 "name": "uring0" 00:08:42.476 }, 00:08:42.476 "method": "bdev_uring_create" 00:08:42.476 }, 00:08:42.476 { 00:08:42.476 "method": "bdev_wait_for_examine" 00:08:42.476 } 00:08:42.476 ] 00:08:42.476 } 00:08:42.476 ] 00:08:42.476 } 00:08:42.476 [2024-07-12 06:33:22.264317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.476 [2024-07-12 06:33:22.300233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.611  Copying: 188/512 [MB] (188 MBps) Copying: 379/512 [MB] (191 MBps) Copying: 512/512 [MB] (average 189 MBps) 00:08:45.611 00:08:45.611 06:33:25 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:45.611 06:33:25 -- dd/uring.sh@60 -- # gen_conf 00:08:45.611 06:33:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.611 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:08:45.611 [2024-07-12 06:33:25.487134] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:45.611 [2024-07-12 06:33:25.487231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70893 ] 00:08:45.611 { 00:08:45.611 "subsystems": [ 00:08:45.611 { 00:08:45.611 "subsystem": "bdev", 00:08:45.611 "config": [ 00:08:45.611 { 00:08:45.611 "params": { 00:08:45.611 "block_size": 512, 00:08:45.611 "num_blocks": 1048576, 00:08:45.611 "name": "malloc0" 00:08:45.611 }, 00:08:45.611 "method": "bdev_malloc_create" 00:08:45.611 }, 00:08:45.611 { 00:08:45.611 "params": { 00:08:45.611 "filename": "/dev/zram1", 00:08:45.611 "name": "uring0" 00:08:45.611 }, 00:08:45.611 "method": "bdev_uring_create" 00:08:45.611 }, 00:08:45.611 { 00:08:45.611 "method": "bdev_wait_for_examine" 00:08:45.611 } 00:08:45.611 ] 00:08:45.611 } 00:08:45.611 ] 00:08:45.611 } 00:08:45.870 [2024-07-12 06:33:25.628075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.870 [2024-07-12 06:33:25.661739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.114  Copying: 138/512 [MB] (138 MBps) Copying: 267/512 [MB] (128 MBps) Copying: 402/512 [MB] (134 MBps) Copying: 512/512 [MB] (average 131 MBps) 00:08:50.114 00:08:50.114 06:33:29 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:50.114 06:33:29 -- dd/uring.sh@66 -- # [[ 
pq8kjlna68ob2i50cx8knrd6vupgfs4uhveo2fgialk5ste2p6zfa7cu2u0uq9en8278npnna31s9xjybcfk0kvm94da0wodwllvzhysbbbvt6pl233s5pwuj1woljnr3ma4phywfzhuru9nubc04xrcrdzyk4rlunp20rq63k57p79v9n7nfk3210uiphjmf1jkmui70c3dzc7zh2n2q2ae5w30iwtt3w6au1ykwhs0q3byc1tj43enoixwvjey3bjl3ybcv7f87u4v0d2zt98xwllknhvrwk7w56qq8pn5wy2as2kitsrcbo8ux88sjrfrkuy923wzmqr1hi9nxbum17vfuryxlqawxjc6q02cpdpn4uezsg2bfb7txm6cpjx66tcfpewfrrvhg3sv4519d07ux9x3muhcchglooih8c80c8b4h4mbdujzxwozpgvfeo0hpk0fok9uxvio86uumzlvu9z606lmkjg3hbpyelal21lrrjtbexp82cq6hnv4fu38adsz8okt8biikveh8gucb1ye5tauyavbvw88qcb22quu99ewpohkpi6itfhq2cz0vsfbopqz246pws6z6mx351bvyfl3sl05l4sdu20ibb5tzg5djhfxp69uttx5zmu3le54nvpsrde1avb8001xxapk930jyr8f07q7u706fhyjpnxipva5p89xho0o1lbz8hyulrsm4ggo0aqsjcbn4djboh0x4tmjporbkg18h7oc8jtivv8hvwv37ud5k5jj37kt9gzurr60k0wwxhe64yvl03nn3f328zdvct2o3watfa1y0e332qr91qfh5mzek2k8kjy9erwe5mstmg0exd8slg8tv4thlftft3md8c2xh669jhsdnlnefrvd18lat3lx66illn6rzxs08suwbgeaep3x9qazg8y3ibsiokfbjso2ozcell4qa7yewefxyy6x08eet1s5ydwayila722vlsa5jumhscf21pkon9o5uvz7t8942f8c == \p\q\8\k\j\l\n\a\6\8\o\b\2\i\5\0\c\x\8\k\n\r\d\6\v\u\p\g\f\s\4\u\h\v\e\o\2\f\g\i\a\l\k\5\s\t\e\2\p\6\z\f\a\7\c\u\2\u\0\u\q\9\e\n\8\2\7\8\n\p\n\n\a\3\1\s\9\x\j\y\b\c\f\k\0\k\v\m\9\4\d\a\0\w\o\d\w\l\l\v\z\h\y\s\b\b\b\v\t\6\p\l\2\3\3\s\5\p\w\u\j\1\w\o\l\j\n\r\3\m\a\4\p\h\y\w\f\z\h\u\r\u\9\n\u\b\c\0\4\x\r\c\r\d\z\y\k\4\r\l\u\n\p\2\0\r\q\6\3\k\5\7\p\7\9\v\9\n\7\n\f\k\3\2\1\0\u\i\p\h\j\m\f\1\j\k\m\u\i\7\0\c\3\d\z\c\7\z\h\2\n\2\q\2\a\e\5\w\3\0\i\w\t\t\3\w\6\a\u\1\y\k\w\h\s\0\q\3\b\y\c\1\t\j\4\3\e\n\o\i\x\w\v\j\e\y\3\b\j\l\3\y\b\c\v\7\f\8\7\u\4\v\0\d\2\z\t\9\8\x\w\l\l\k\n\h\v\r\w\k\7\w\5\6\q\q\8\p\n\5\w\y\2\a\s\2\k\i\t\s\r\c\b\o\8\u\x\8\8\s\j\r\f\r\k\u\y\9\2\3\w\z\m\q\r\1\h\i\9\n\x\b\u\m\1\7\v\f\u\r\y\x\l\q\a\w\x\j\c\6\q\0\2\c\p\d\p\n\4\u\e\z\s\g\2\b\f\b\7\t\x\m\6\c\p\j\x\6\6\t\c\f\p\e\w\f\r\r\v\h\g\3\s\v\4\5\1\9\d\0\7\u\x\9\x\3\m\u\h\c\c\h\g\l\o\o\i\h\8\c\8\0\c\8\b\4\h\4\m\b\d\u\j\z\x\w\o\z\p\g\v\f\e\o\0\h\p\k\0\f\o\k\9\u\x\v\i\o\8\6\u\u\m\z\l\v\u\9\z\6\0\6\l\m\k\j\g\3\h\b\p\y\e\l\a\l\2\1\l\r\r\j\t\b\e\x\p\8\2\c\q\6\h\n\v\4\f\u\3\8\a\d\s\z\8\o\k\t\8\b\i\i\k\v\e\h\8\g\u\c\b\1\y\e\5\t\a\u\y\a\v\b\v\w\8\8\q\c\b\2\2\q\u\u\9\9\e\w\p\o\h\k\p\i\6\i\t\f\h\q\2\c\z\0\v\s\f\b\o\p\q\z\2\4\6\p\w\s\6\z\6\m\x\3\5\1\b\v\y\f\l\3\s\l\0\5\l\4\s\d\u\2\0\i\b\b\5\t\z\g\5\d\j\h\f\x\p\6\9\u\t\t\x\5\z\m\u\3\l\e\5\4\n\v\p\s\r\d\e\1\a\v\b\8\0\0\1\x\x\a\p\k\9\3\0\j\y\r\8\f\0\7\q\7\u\7\0\6\f\h\y\j\p\n\x\i\p\v\a\5\p\8\9\x\h\o\0\o\1\l\b\z\8\h\y\u\l\r\s\m\4\g\g\o\0\a\q\s\j\c\b\n\4\d\j\b\o\h\0\x\4\t\m\j\p\o\r\b\k\g\1\8\h\7\o\c\8\j\t\i\v\v\8\h\v\w\v\3\7\u\d\5\k\5\j\j\3\7\k\t\9\g\z\u\r\r\6\0\k\0\w\w\x\h\e\6\4\y\v\l\0\3\n\n\3\f\3\2\8\z\d\v\c\t\2\o\3\w\a\t\f\a\1\y\0\e\3\3\2\q\r\9\1\q\f\h\5\m\z\e\k\2\k\8\k\j\y\9\e\r\w\e\5\m\s\t\m\g\0\e\x\d\8\s\l\g\8\t\v\4\t\h\l\f\t\f\t\3\m\d\8\c\2\x\h\6\6\9\j\h\s\d\n\l\n\e\f\r\v\d\1\8\l\a\t\3\l\x\6\6\i\l\l\n\6\r\z\x\s\0\8\s\u\w\b\g\e\a\e\p\3\x\9\q\a\z\g\8\y\3\i\b\s\i\o\k\f\b\j\s\o\2\o\z\c\e\l\l\4\q\a\7\y\e\w\e\f\x\y\y\6\x\0\8\e\e\t\1\s\5\y\d\w\a\y\i\l\a\7\2\2\v\l\s\a\5\j\u\m\h\s\c\f\2\1\p\k\o\n\9\o\5\u\v\z\7\t\8\9\4\2\f\8\c ]] 00:08:50.114 06:33:29 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:50.114 06:33:29 -- dd/uring.sh@69 -- # [[ 
pq8kjlna68ob2i50cx8knrd6vupgfs4uhveo2fgialk5ste2p6zfa7cu2u0uq9en8278npnna31s9xjybcfk0kvm94da0wodwllvzhysbbbvt6pl233s5pwuj1woljnr3ma4phywfzhuru9nubc04xrcrdzyk4rlunp20rq63k57p79v9n7nfk3210uiphjmf1jkmui70c3dzc7zh2n2q2ae5w30iwtt3w6au1ykwhs0q3byc1tj43enoixwvjey3bjl3ybcv7f87u4v0d2zt98xwllknhvrwk7w56qq8pn5wy2as2kitsrcbo8ux88sjrfrkuy923wzmqr1hi9nxbum17vfuryxlqawxjc6q02cpdpn4uezsg2bfb7txm6cpjx66tcfpewfrrvhg3sv4519d07ux9x3muhcchglooih8c80c8b4h4mbdujzxwozpgvfeo0hpk0fok9uxvio86uumzlvu9z606lmkjg3hbpyelal21lrrjtbexp82cq6hnv4fu38adsz8okt8biikveh8gucb1ye5tauyavbvw88qcb22quu99ewpohkpi6itfhq2cz0vsfbopqz246pws6z6mx351bvyfl3sl05l4sdu20ibb5tzg5djhfxp69uttx5zmu3le54nvpsrde1avb8001xxapk930jyr8f07q7u706fhyjpnxipva5p89xho0o1lbz8hyulrsm4ggo0aqsjcbn4djboh0x4tmjporbkg18h7oc8jtivv8hvwv37ud5k5jj37kt9gzurr60k0wwxhe64yvl03nn3f328zdvct2o3watfa1y0e332qr91qfh5mzek2k8kjy9erwe5mstmg0exd8slg8tv4thlftft3md8c2xh669jhsdnlnefrvd18lat3lx66illn6rzxs08suwbgeaep3x9qazg8y3ibsiokfbjso2ozcell4qa7yewefxyy6x08eet1s5ydwayila722vlsa5jumhscf21pkon9o5uvz7t8942f8c == \p\q\8\k\j\l\n\a\6\8\o\b\2\i\5\0\c\x\8\k\n\r\d\6\v\u\p\g\f\s\4\u\h\v\e\o\2\f\g\i\a\l\k\5\s\t\e\2\p\6\z\f\a\7\c\u\2\u\0\u\q\9\e\n\8\2\7\8\n\p\n\n\a\3\1\s\9\x\j\y\b\c\f\k\0\k\v\m\9\4\d\a\0\w\o\d\w\l\l\v\z\h\y\s\b\b\b\v\t\6\p\l\2\3\3\s\5\p\w\u\j\1\w\o\l\j\n\r\3\m\a\4\p\h\y\w\f\z\h\u\r\u\9\n\u\b\c\0\4\x\r\c\r\d\z\y\k\4\r\l\u\n\p\2\0\r\q\6\3\k\5\7\p\7\9\v\9\n\7\n\f\k\3\2\1\0\u\i\p\h\j\m\f\1\j\k\m\u\i\7\0\c\3\d\z\c\7\z\h\2\n\2\q\2\a\e\5\w\3\0\i\w\t\t\3\w\6\a\u\1\y\k\w\h\s\0\q\3\b\y\c\1\t\j\4\3\e\n\o\i\x\w\v\j\e\y\3\b\j\l\3\y\b\c\v\7\f\8\7\u\4\v\0\d\2\z\t\9\8\x\w\l\l\k\n\h\v\r\w\k\7\w\5\6\q\q\8\p\n\5\w\y\2\a\s\2\k\i\t\s\r\c\b\o\8\u\x\8\8\s\j\r\f\r\k\u\y\9\2\3\w\z\m\q\r\1\h\i\9\n\x\b\u\m\1\7\v\f\u\r\y\x\l\q\a\w\x\j\c\6\q\0\2\c\p\d\p\n\4\u\e\z\s\g\2\b\f\b\7\t\x\m\6\c\p\j\x\6\6\t\c\f\p\e\w\f\r\r\v\h\g\3\s\v\4\5\1\9\d\0\7\u\x\9\x\3\m\u\h\c\c\h\g\l\o\o\i\h\8\c\8\0\c\8\b\4\h\4\m\b\d\u\j\z\x\w\o\z\p\g\v\f\e\o\0\h\p\k\0\f\o\k\9\u\x\v\i\o\8\6\u\u\m\z\l\v\u\9\z\6\0\6\l\m\k\j\g\3\h\b\p\y\e\l\a\l\2\1\l\r\r\j\t\b\e\x\p\8\2\c\q\6\h\n\v\4\f\u\3\8\a\d\s\z\8\o\k\t\8\b\i\i\k\v\e\h\8\g\u\c\b\1\y\e\5\t\a\u\y\a\v\b\v\w\8\8\q\c\b\2\2\q\u\u\9\9\e\w\p\o\h\k\p\i\6\i\t\f\h\q\2\c\z\0\v\s\f\b\o\p\q\z\2\4\6\p\w\s\6\z\6\m\x\3\5\1\b\v\y\f\l\3\s\l\0\5\l\4\s\d\u\2\0\i\b\b\5\t\z\g\5\d\j\h\f\x\p\6\9\u\t\t\x\5\z\m\u\3\l\e\5\4\n\v\p\s\r\d\e\1\a\v\b\8\0\0\1\x\x\a\p\k\9\3\0\j\y\r\8\f\0\7\q\7\u\7\0\6\f\h\y\j\p\n\x\i\p\v\a\5\p\8\9\x\h\o\0\o\1\l\b\z\8\h\y\u\l\r\s\m\4\g\g\o\0\a\q\s\j\c\b\n\4\d\j\b\o\h\0\x\4\t\m\j\p\o\r\b\k\g\1\8\h\7\o\c\8\j\t\i\v\v\8\h\v\w\v\3\7\u\d\5\k\5\j\j\3\7\k\t\9\g\z\u\r\r\6\0\k\0\w\w\x\h\e\6\4\y\v\l\0\3\n\n\3\f\3\2\8\z\d\v\c\t\2\o\3\w\a\t\f\a\1\y\0\e\3\3\2\q\r\9\1\q\f\h\5\m\z\e\k\2\k\8\k\j\y\9\e\r\w\e\5\m\s\t\m\g\0\e\x\d\8\s\l\g\8\t\v\4\t\h\l\f\t\f\t\3\m\d\8\c\2\x\h\6\6\9\j\h\s\d\n\l\n\e\f\r\v\d\1\8\l\a\t\3\l\x\6\6\i\l\l\n\6\r\z\x\s\0\8\s\u\w\b\g\e\a\e\p\3\x\9\q\a\z\g\8\y\3\i\b\s\i\o\k\f\b\j\s\o\2\o\z\c\e\l\l\4\q\a\7\y\e\w\e\f\x\y\y\6\x\0\8\e\e\t\1\s\5\y\d\w\a\y\i\l\a\7\2\2\v\l\s\a\5\j\u\m\h\s\c\f\2\1\p\k\o\n\9\o\5\u\v\z\7\t\8\9\4\2\f\8\c ]] 00:08:50.114 06:33:29 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:50.681 06:33:30 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:50.681 06:33:30 -- dd/uring.sh@75 -- # gen_conf 00:08:50.681 06:33:30 -- dd/common.sh@31 -- # xtrace_disable 00:08:50.681 06:33:30 -- common/autotest_common.sh@10 -- # set +x 
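dd_uring_copy stages its own block device: a zram device is hot-added (id 1 in this run) and sized to 512M, a uring bdev named uring0 is created on /dev/zram1 alongside a 512 MB malloc0, and magic.dump0 is seeded with the 1024-character magic string above and padded with zeros to ~512 MB. The file is copied into uring0 and back out to magic.dump1, the leading 1024 bytes of each dump are compared, the files are diffed, and uring0 is then drained into malloc0 (~153 MBps above). A sketch of the device setup, reconstructed from the xtrace — the disksize sysfs path is an assumption, since the log shows only 'echo 512M' and an existence check on /sys/block/zram1:

    # Hot-add a zram device and size it (the hot_add read returned 1 in this run).
    id=$(cat /sys/class/zram-control/hot_add)
    echo 512M > "/sys/block/zram${id}/disksize"   # assumed redirect target
    # bdev config then passed to spdk_dd (values verbatim from the log's JSON):
    #   bdev_uring_create  { "filename": "/dev/zram1", "name": "uring0" }
    #   bdev_malloc_create { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 }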
00:08:50.681 [2024-07-12 06:33:30.396369] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:50.681 [2024-07-12 06:33:30.396449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:08:50.681 { 00:08:50.681 "subsystems": [ 00:08:50.681 { 00:08:50.681 "subsystem": "bdev", 00:08:50.681 "config": [ 00:08:50.681 { 00:08:50.681 "params": { 00:08:50.681 "block_size": 512, 00:08:50.681 "num_blocks": 1048576, 00:08:50.681 "name": "malloc0" 00:08:50.681 }, 00:08:50.681 "method": "bdev_malloc_create" 00:08:50.681 }, 00:08:50.681 { 00:08:50.681 "params": { 00:08:50.681 "filename": "/dev/zram1", 00:08:50.681 "name": "uring0" 00:08:50.681 }, 00:08:50.681 "method": "bdev_uring_create" 00:08:50.681 }, 00:08:50.681 { 00:08:50.681 "method": "bdev_wait_for_examine" 00:08:50.681 } 00:08:50.681 ] 00:08:50.681 } 00:08:50.681 ] 00:08:50.681 } 00:08:50.681 [2024-07-12 06:33:30.528507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.681 [2024-07-12 06:33:30.562354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.448  Copying: 154/512 [MB] (154 MBps) Copying: 307/512 [MB] (153 MBps) Copying: 461/512 [MB] (153 MBps) Copying: 512/512 [MB] (average 153 MBps) 00:08:54.448 00:08:54.448 06:33:34 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:54.448 06:33:34 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:54.448 06:33:34 -- dd/uring.sh@87 -- # : 00:08:54.448 06:33:34 -- dd/uring.sh@87 -- # : 00:08:54.448 06:33:34 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:54.448 06:33:34 -- dd/uring.sh@87 -- # gen_conf 00:08:54.448 06:33:34 -- dd/common.sh@31 -- # xtrace_disable 00:08:54.448 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 [2024-07-12 06:33:34.301024] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:54.448 [2024-07-12 06:33:34.301116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71013 ] 00:08:54.448 { 00:08:54.448 "subsystems": [ 00:08:54.448 { 00:08:54.448 "subsystem": "bdev", 00:08:54.448 "config": [ 00:08:54.448 { 00:08:54.448 "params": { 00:08:54.448 "block_size": 512, 00:08:54.448 "num_blocks": 1048576, 00:08:54.448 "name": "malloc0" 00:08:54.448 }, 00:08:54.448 "method": "bdev_malloc_create" 00:08:54.448 }, 00:08:54.448 { 00:08:54.448 "params": { 00:08:54.448 "filename": "/dev/zram1", 00:08:54.448 "name": "uring0" 00:08:54.448 }, 00:08:54.448 "method": "bdev_uring_create" 00:08:54.448 }, 00:08:54.448 { 00:08:54.448 "params": { 00:08:54.448 "name": "uring0" 00:08:54.448 }, 00:08:54.448 "method": "bdev_uring_delete" 00:08:54.448 }, 00:08:54.448 { 00:08:54.448 "method": "bdev_wait_for_examine" 00:08:54.448 } 00:08:54.448 ] 00:08:54.448 } 00:08:54.448 ] 00:08:54.448 } 00:08:54.705 [2024-07-12 06:33:34.438590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.705 [2024-07-12 06:33:34.472376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.972  Copying: 0/0 [B] (average 0 Bps) 00:08:54.972 00:08:54.972 06:33:34 -- dd/uring.sh@94 -- # : 00:08:54.972 06:33:34 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.972 06:33:34 -- dd/uring.sh@94 -- # gen_conf 00:08:54.972 06:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:08:54.972 06:33:34 -- dd/common.sh@31 -- # xtrace_disable 00:08:54.972 06:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.972 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:54.972 06:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.972 06:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:54.972 06:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.972 06:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:54.972 06:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.972 06:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:54.972 06:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.972 06:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.972 06:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:55.248 [2024-07-12 06:33:34.919338] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:55.248 [2024-07-12 06:33:34.919435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71041 ] 00:08:55.248 { 00:08:55.248 "subsystems": [ 00:08:55.248 { 00:08:55.248 "subsystem": "bdev", 00:08:55.248 "config": [ 00:08:55.248 { 00:08:55.248 "params": { 00:08:55.248 "block_size": 512, 00:08:55.248 "num_blocks": 1048576, 00:08:55.248 "name": "malloc0" 00:08:55.248 }, 00:08:55.248 "method": "bdev_malloc_create" 00:08:55.248 }, 00:08:55.248 { 00:08:55.248 "params": { 00:08:55.248 "filename": "/dev/zram1", 00:08:55.248 "name": "uring0" 00:08:55.248 }, 00:08:55.248 "method": "bdev_uring_create" 00:08:55.248 }, 00:08:55.248 { 00:08:55.248 "params": { 00:08:55.248 "name": "uring0" 00:08:55.248 }, 00:08:55.248 "method": "bdev_uring_delete" 00:08:55.248 }, 00:08:55.248 { 00:08:55.248 "method": "bdev_wait_for_examine" 00:08:55.248 } 00:08:55.248 ] 00:08:55.248 } 00:08:55.248 ] 00:08:55.248 } 00:08:55.248 [2024-07-12 06:33:35.061450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.248 [2024-07-12 06:33:35.102111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.505 [2024-07-12 06:33:35.246073] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:55.505 [2024-07-12 06:33:35.246126] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:55.505 [2024-07-12 06:33:35.246138] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:55.505 [2024-07-12 06:33:35.246147] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.505 [2024-07-12 06:33:35.406743] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:55.763 06:33:35 -- common/autotest_common.sh@643 -- # es=237 00:08:55.763 06:33:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:55.763 06:33:35 -- common/autotest_common.sh@652 -- # es=109 00:08:55.763 06:33:35 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:55.763 06:33:35 -- common/autotest_common.sh@660 -- # es=1 00:08:55.763 06:33:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:55.763 06:33:35 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:55.763 06:33:35 -- dd/common.sh@172 -- # local id=1 00:08:55.763 06:33:35 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:55.763 06:33:35 -- dd/common.sh@176 -- # echo 1 00:08:55.763 06:33:35 -- dd/common.sh@177 -- # echo 1 00:08:55.763 06:33:35 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:56.022 00:08:56.022 real 0m14.646s 00:08:56.022 user 0m8.336s 00:08:56.022 sys 0m5.647s 00:08:56.022 06:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.022 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.022 ************************************ 00:08:56.022 END TEST dd_uring_copy 00:08:56.022 ************************************ 00:08:56.022 00:08:56.022 real 0m14.778s 00:08:56.022 user 0m8.399s 00:08:56.022 sys 0m5.719s 00:08:56.022 06:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.022 ************************************ 00:08:56.022 END TEST spdk_dd_uring 00:08:56.022 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.022 ************************************ 00:08:56.022 06:33:35 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:56.022 06:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:56.022 06:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.022 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.022 ************************************ 00:08:56.022 START TEST spdk_dd_sparse 00:08:56.022 ************************************ 00:08:56.022 06:33:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:56.022 * Looking for test storage... 00:08:56.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:56.022 06:33:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.022 06:33:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.022 06:33:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.022 06:33:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.022 06:33:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.022 06:33:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.022 06:33:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.022 06:33:35 -- paths/export.sh@5 -- # export PATH 00:08:56.022 06:33:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.022 06:33:35 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:56.022 06:33:35 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:56.022 06:33:35 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:08:56.022 06:33:35 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:56.022 06:33:35 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:56.022 06:33:35 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:56.022 06:33:35 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:56.022 06:33:35 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:56.022 06:33:35 -- dd/sparse.sh@118 -- # prepare 00:08:56.022 06:33:35 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:56.022 06:33:35 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:56.022 1+0 records in 00:08:56.022 1+0 records out 00:08:56.022 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00452899 s, 926 MB/s 00:08:56.022 06:33:35 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:56.022 1+0 records in 00:08:56.022 1+0 records out 00:08:56.023 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00556738 s, 753 MB/s 00:08:56.023 06:33:35 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:56.023 1+0 records in 00:08:56.023 1+0 records out 00:08:56.023 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00424848 s, 987 MB/s 00:08:56.023 06:33:35 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:56.023 06:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:56.023 06:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.023 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.023 ************************************ 00:08:56.023 START TEST dd_sparse_file_to_file 00:08:56.023 ************************************ 00:08:56.023 06:33:35 -- common/autotest_common.sh@1104 -- # file_to_file 00:08:56.023 06:33:35 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:56.023 06:33:35 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:56.023 06:33:35 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:56.023 06:33:35 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:56.023 06:33:35 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:56.023 06:33:35 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:56.023 06:33:35 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:56.023 06:33:35 -- dd/sparse.sh@41 -- # gen_conf 00:08:56.023 06:33:35 -- dd/common.sh@31 -- # xtrace_disable 00:08:56.023 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.281 [2024-07-12 06:33:35.983409] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:56.281 [2024-07-12 06:33:35.983502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71126 ] 00:08:56.281 { 00:08:56.281 "subsystems": [ 00:08:56.281 { 00:08:56.281 "subsystem": "bdev", 00:08:56.281 "config": [ 00:08:56.281 { 00:08:56.281 "params": { 00:08:56.281 "block_size": 4096, 00:08:56.281 "filename": "dd_sparse_aio_disk", 00:08:56.281 "name": "dd_aio" 00:08:56.281 }, 00:08:56.281 "method": "bdev_aio_create" 00:08:56.281 }, 00:08:56.281 { 00:08:56.281 "params": { 00:08:56.281 "lvs_name": "dd_lvstore", 00:08:56.281 "bdev_name": "dd_aio" 00:08:56.281 }, 00:08:56.281 "method": "bdev_lvol_create_lvstore" 00:08:56.281 }, 00:08:56.281 { 00:08:56.281 "method": "bdev_wait_for_examine" 00:08:56.281 } 00:08:56.281 ] 00:08:56.281 } 00:08:56.281 ] 00:08:56.281 } 00:08:56.281 [2024-07-12 06:33:36.124367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.281 [2024-07-12 06:33:36.163763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.540  Copying: 12/36 [MB] (average 1333 MBps) 00:08:56.540 00:08:56.540 06:33:36 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:56.540 06:33:36 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:56.540 06:33:36 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:56.540 06:33:36 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:56.540 06:33:36 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:56.540 06:33:36 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:56.799 06:33:36 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:56.799 06:33:36 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:56.799 06:33:36 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:56.799 06:33:36 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:56.799 00:08:56.799 real 0m0.532s 00:08:56.799 user 0m0.283s 00:08:56.799 sys 0m0.155s 00:08:56.799 06:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.799 06:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:56.799 ************************************ 00:08:56.799 END TEST dd_sparse_file_to_file 00:08:56.799 ************************************ 00:08:56.799 06:33:36 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:56.799 06:33:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:56.799 06:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.799 06:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:56.799 ************************************ 00:08:56.799 START TEST dd_sparse_file_to_bdev 00:08:56.799 ************************************ 00:08:56.799 06:33:36 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:08:56.799 06:33:36 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:56.799 06:33:36 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:56.799 06:33:36 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:56.799 06:33:36 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:56.799 06:33:36 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:56.799 06:33:36 -- dd/sparse.sh@73 -- # gen_conf 
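The pass condition just logged for dd_sparse_file_to_file deserves a note: the copy has to preserve both the apparent size (stat %s, 37748736 bytes) and the allocated block count (stat %b, 24576 512-byte blocks, i.e. only 12 MiB of the 36 MiB file is backed by data). A sketch of the prepare and verify logic, using the file names and dd invocations from the prepare step earlier in this log:

# Three 4 MiB writes at offsets 0, 16 MiB, and 32 MiB leave two 12 MiB holes.
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

apparent=$(stat --printf=%s file_zero1)   # 37748736 bytes apparent (36 MiB)
blocks=$(stat --printf=%b file_zero1)     # 24576 blocks allocated (12 MiB)

# After the --sparse copy, file_zero2 must match on both counts; if the
# holes had been materialized, %b would read 73728 rather than 24576.
[[ $(stat --printf=%s file_zero2) == "$apparent" ]]
[[ $(stat --printf=%b file_zero2) == "$blocks" ]]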
00:08:56.799 06:33:36 -- dd/common.sh@31 -- # xtrace_disable 00:08:56.799 06:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:56.799 { 00:08:56.799 "subsystems": [ 00:08:56.799 { 00:08:56.799 "subsystem": "bdev", 00:08:56.799 "config": [ 00:08:56.799 { 00:08:56.799 "params": { 00:08:56.799 "block_size": 4096, 00:08:56.799 "filename": "dd_sparse_aio_disk", 00:08:56.799 "name": "dd_aio" 00:08:56.799 }, 00:08:56.799 "method": "bdev_aio_create" 00:08:56.799 }, 00:08:56.799 { 00:08:56.799 "params": { 00:08:56.799 "lvs_name": "dd_lvstore", 00:08:56.799 "lvol_name": "dd_lvol", 00:08:56.799 "size": 37748736, 00:08:56.799 "thin_provision": true 00:08:56.799 }, 00:08:56.799 "method": "bdev_lvol_create" 00:08:56.799 }, 00:08:56.799 { 00:08:56.799 "method": "bdev_wait_for_examine" 00:08:56.799 } 00:08:56.799 ] 00:08:56.799 } 00:08:56.799 ] 00:08:56.799 } 00:08:56.799 [2024-07-12 06:33:36.580548] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:56.799 [2024-07-12 06:33:36.580627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71161 ] 00:08:57.057 [2024-07-12 06:33:36.718596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.057 [2024-07-12 06:33:36.758839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.057 [2024-07-12 06:33:36.825632] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:57.057  Copying: 12/36 [MB] (average 600 MBps)[2024-07-12 06:33:36.861741] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:57.315 00:08:57.315 00:08:57.315 00:08:57.315 real 0m0.526s 00:08:57.315 user 0m0.312s 00:08:57.315 sys 0m0.132s 00:08:57.315 06:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.315 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:57.315 ************************************ 00:08:57.315 END TEST dd_sparse_file_to_bdev 00:08:57.315 ************************************ 00:08:57.315 06:33:37 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:57.316 06:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.316 06:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.316 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:57.316 ************************************ 00:08:57.316 START TEST dd_sparse_bdev_to_file 00:08:57.316 ************************************ 00:08:57.316 06:33:37 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:08:57.316 06:33:37 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:57.316 06:33:37 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:57.316 06:33:37 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:57.316 06:33:37 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:57.316 06:33:37 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:57.316 06:33:37 -- dd/sparse.sh@91 -- # gen_conf 00:08:57.316 06:33:37 -- dd/common.sh@31 -- # xtrace_disable 00:08:57.316 06:33:37 -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.316 [2024-07-12 06:33:37.132169] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:57.316 [2024-07-12 06:33:37.132262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71198 ] 00:08:57.316 { 00:08:57.316 "subsystems": [ 00:08:57.316 { 00:08:57.316 "subsystem": "bdev", 00:08:57.316 "config": [ 00:08:57.316 { 00:08:57.316 "params": { 00:08:57.316 "block_size": 4096, 00:08:57.316 "filename": "dd_sparse_aio_disk", 00:08:57.316 "name": "dd_aio" 00:08:57.316 }, 00:08:57.316 "method": "bdev_aio_create" 00:08:57.316 }, 00:08:57.316 { 00:08:57.316 "method": "bdev_wait_for_examine" 00:08:57.316 } 00:08:57.316 ] 00:08:57.316 } 00:08:57.316 ] 00:08:57.316 } 00:08:57.573 [2024-07-12 06:33:37.269385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.573 [2024-07-12 06:33:37.304238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.832  Copying: 12/36 [MB] (average 1200 MBps) 00:08:57.832 00:08:57.832 06:33:37 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:57.832 06:33:37 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:57.832 06:33:37 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:57.832 06:33:37 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:57.832 06:33:37 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:57.832 06:33:37 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:57.832 06:33:37 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:57.832 06:33:37 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:57.832 06:33:37 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:57.832 06:33:37 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:57.832 00:08:57.832 real 0m0.492s 00:08:57.832 user 0m0.287s 00:08:57.832 sys 0m0.126s 00:08:57.832 06:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.832 ************************************ 00:08:57.832 END TEST dd_sparse_bdev_to_file 00:08:57.832 ************************************ 00:08:57.832 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:57.832 06:33:37 -- dd/sparse.sh@1 -- # cleanup 00:08:57.832 06:33:37 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:57.832 06:33:37 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:57.832 06:33:37 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:57.832 06:33:37 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:57.832 00:08:57.832 real 0m1.823s 00:08:57.832 user 0m0.959s 00:08:57.832 sys 0m0.603s 00:08:57.832 06:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.832 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:57.832 ************************************ 00:08:57.833 END TEST spdk_dd_sparse 00:08:57.833 ************************************ 00:08:57.833 06:33:37 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:57.833 06:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.833 06:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.833 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 ************************************ 00:08:57.833 START TEST spdk_dd_negative 00:08:57.833 ************************************ 00:08:57.833 06:33:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
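Everything from here on is spdk_dd_negative, and every one of its tests leans on the harness NOT wrapper whose xtrace (autotest_common.sh@640 through @667) repeats below: run a command that is expected to fail, fold a death-by-signal exit status back into range, and succeed only on a plain failure. A simplified sketch consistent with that trace; the real helper lives in test/common/autotest_common.sh and treats core-dump signals specially:

NOT() {
    local es=0
    "$@" || es=$?           # run the command that is supposed to fail
    if ((es > 128)); then   # >128 means the command died on a signal
        es=$((es - 128))    # e.g. the es=237 -> es=109 step traced earlier
        case "$es" in
            *) es=1 ;;      # collapsed here; the real case statement fails
        esac                # the test outright on SIGSEGV and friends
    fi
    ((!es == 0))            # NOT succeeds only if the command failed
}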
00:08:58.092 * Looking for test storage... 00:08:58.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:58.092 06:33:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.092 06:33:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.092 06:33:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.092 06:33:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.092 06:33:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.092 06:33:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.092 06:33:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.092 06:33:37 -- paths/export.sh@5 -- # export PATH 00:08:58.092 06:33:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.092 06:33:37 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.092 06:33:37 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.092 06:33:37 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.092 06:33:37 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.092 06:33:37 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:58.092 06:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.092 06:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.092 06:33:37 -- 
common/autotest_common.sh@10 -- # set +x
00:08:58.092 ************************************
00:08:58.092 START TEST dd_invalid_arguments
00:08:58.092 ************************************
00:08:58.093 06:33:37 -- common/autotest_common.sh@1104 -- # invalid_arguments
00:08:58.093 06:33:37 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:08:58.093 06:33:37 -- common/autotest_common.sh@640 -- # local es=0
00:08:58.093 06:33:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:08:58.093 06:33:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:08:58.093 06:33:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:08:58.093 06:33:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:08:58.093 06:33:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:08:58.093 06:33:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:08:58.093 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:08:58.093 options:
00:08:58.093 -c, --config <config> JSON config file (default none)
00:08:58.093 --json <config> JSON config file (default none)
00:08:58.093 --json-ignore-init-errors
00:08:58.093 don't exit on invalid config entry
00:08:58.093 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY
00:08:58.093 -g, --single-file-segments
00:08:58.093 force creating just one hugetlbfs file
00:08:58.093 -h, --help show this usage
00:08:58.093 -i, --shm-id <id> shared memory ID (optional)
00:08:58.093 -m, --cpumask <mask or list> core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK
00:08:58.093 --lcores <list of lcore sets> lcore to CPU mapping list. The list is in the format:
00:08:58.093 <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:08:58.093 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:08:58.093 Within the group, '-' is used for range separator,
00:08:58.093 ',' is used for single number separator.
00:08:58.093 '( )' can be omitted for single element group,
00:08:58.093 '@' can be omitted if cpus and lcores have the same value
00:08:58.093 -n, --mem-channels <num> channel number of memory channels used for DPDK
00:08:58.093 -p, --main-core <id> main (primary) core for DPDK
00:08:58.093 -r, --rpc-socket <path> RPC listen address (default /var/tmp/spdk.sock)
00:08:58.093 -s, --mem-size <size> memory size in MB for DPDK (default: 0MB)
00:08:58.093 --disable-cpumask-locks Disable CPU core lock files.
00:08:58.093 --silence-noticelog disable notice level logging to stderr
00:08:58.093 --msg-mempool-size <size> global message memory pool size in count (default: 262143)
00:08:58.093 -u, --no-pci disable PCI access
00:08:58.093 --wait-for-rpc wait for RPCs to initialize subsystems
00:08:58.093 --max-delay <num> maximum reactor delay (in microseconds)
00:08:58.093 -B, --pci-blocked <bdf> pci addr to block (can be used more than once)
00:08:58.093 -A, --pci-allowed <bdf> pci addr to allow (-B and -A cannot be used at the same time)
00:08:58.093 -R, --huge-unlink unlink huge files after initialization
00:08:58.093 -v, --version print SPDK version
00:08:58.093 --huge-dir <path> use a specific hugetlbfs mount to reserve memory from
00:08:58.093 --iova-mode <pa/va> set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:08:58.093 --base-virtaddr <addr> the base virtual address for DPDK (default: 0x200000000000)
00:08:58.093 --num-trace-entries <num> number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768)
00:08:58.093 Tracepoints vary in size and can use more than one trace entry.
00:08:58.093 --rpcs-allowed comma-separated list of permitted RPCS
00:08:58.093 --env-context Opaque context for use of the env implementation
00:08:58.093 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:08:58.093 --no-huge run without using hugepages
00:08:58.093 -L, --logflag <flag> enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd)
00:08:58.093 -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:08:58.093 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all)
00:08:58.093 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1).
00:08:58.093 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:08:58.093 [2024-07-12 06:33:37.839859] spdk_dd.c:1460:main: *ERROR*: Invalid arguments
00:08:58.093 can be combined (e.g. thread,bdev:0x1).
00:08:58.093 All available tpoints can be found in /include/spdk_internal/trace_defs.h
00:08:58.093 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode)
00:08:58.093 [--------- DD Options ---------]
00:08:58.093 --if Input file. Must specify either --if or --ib.
00:08:58.093 --ib Input bdev. Must specifier either --if or --ib
00:08:58.093 --of Output file. Must specify either --of or --ob.
00:08:58.093 --ob Output bdev. Must specify either --of or --ob.
00:08:58.093 --iflag Input file flags.
00:08:58.093 --oflag Output file flags.
00:08:58.093 --bs I/O unit size (default: 4096)
00:08:58.093 --qd Queue depth (default: 2)
00:08:58.093 --count I/O unit count. The number of I/O units to copy.
(default: all) 00:08:58.093 --skip Skip this many I/O units at start of input. (default: 0) 00:08:58.093 --seek Skip this many I/O units at start of output. (default: 0) 00:08:58.093 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:58.093 --sparse Enable hole skipping in input target 00:08:58.093 Available iflag and oflag values: 00:08:58.093 append - append mode 00:08:58.093 direct - use direct I/O for data 00:08:58.093 directory - fail unless a directory 00:08:58.093 dsync - use synchronized I/O for data 00:08:58.093 noatime - do not update access time 00:08:58.093 noctty - do not assign controlling terminal from file 00:08:58.093 nofollow - do not follow symlinks 00:08:58.093 nonblock - use non-blocking I/O 00:08:58.093 sync - use synchronized I/O for data and metadata 00:08:58.093 06:33:37 -- common/autotest_common.sh@643 -- # es=2 00:08:58.093 06:33:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:58.093 06:33:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:58.093 06:33:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:58.093 00:08:58.093 real 0m0.064s 00:08:58.093 user 0m0.040s 00:08:58.093 sys 0m0.023s 00:08:58.093 06:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.093 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 ************************************ 00:08:58.093 END TEST dd_invalid_arguments 00:08:58.093 ************************************ 00:08:58.093 06:33:37 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:58.093 06:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.093 06:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.093 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 ************************************ 00:08:58.093 START TEST dd_double_input 00:08:58.093 ************************************ 00:08:58.093 06:33:37 -- common/autotest_common.sh@1104 -- # double_input 00:08:58.093 06:33:37 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.093 06:33:37 -- common/autotest_common.sh@640 -- # local es=0 00:08:58.093 06:33:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.093 06:33:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.093 06:33:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.093 06:33:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.093 06:33:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.093 06:33:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.093 06:33:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.093 [2024-07-12 06:33:37.945902] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
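The @628-@634 xtrace block that keeps recurring (it just ran again above before spdk_dd was invoked with --if, --ib, and --ob at once) is valid_exec_arg: before NOT runs anything, the harness checks that the first argument is actually executable, whether it is a builtin, a function, or a file on disk. A rough sketch of that helper as traced:

valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        builtin | function) ;;   # shell-level commands pass as-is
        file)                    # on-disk command: resolve it, check +x
            arg=$(type -P "$arg") && [[ -x $arg ]]
            ;;
        *) return 1 ;;           # aliases, keywords, unknowns are rejected
    esac
}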
00:08:58.093 06:33:37 -- common/autotest_common.sh@643 -- # es=22 00:08:58.093 06:33:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:58.093 06:33:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:58.093 06:33:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:58.093 00:08:58.093 real 0m0.059s 00:08:58.093 user 0m0.036s 00:08:58.093 sys 0m0.023s 00:08:58.093 06:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.093 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 ************************************ 00:08:58.093 END TEST dd_double_input 00:08:58.093 ************************************ 00:08:58.093 06:33:37 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:58.093 06:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.093 06:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.093 06:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 ************************************ 00:08:58.093 START TEST dd_double_output 00:08:58.093 ************************************ 00:08:58.093 06:33:38 -- common/autotest_common.sh@1104 -- # double_output 00:08:58.093 06:33:38 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.093 06:33:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:58.093 06:33:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.093 06:33:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.093 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.093 06:33:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.353 06:33:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.353 [2024-07-12 06:33:38.057886] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
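dd_double_input and dd_double_output above are two cells of a four-way argument matrix: spdk_dd wants exactly one input (--if or --ib) and exactly one output (--of or --ob). dd_no_input and dd_no_output below cover the remaining two cells. Condensed into bare assertions (a sketch; the suite wraps each call in NOT rather than a plain !), with the paths and spdk_dd.c error sites taken from this log:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

! "$SPDK_DD" --if="$DUMP0" --ib= --ob=           # both inputs  -> spdk_dd.c:1467
! "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --ob=   # both outputs -> spdk_dd.c:1473
! "$SPDK_DD" --ob=                               # no input     -> spdk_dd.c:1479
! "$SPDK_DD" --if="$DUMP0"                       # no output    -> spdk_dd.c:1485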
00:08:58.353 06:33:38 -- common/autotest_common.sh@643 -- # es=22 00:08:58.353 06:33:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:58.353 06:33:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:58.353 06:33:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:58.353 00:08:58.353 real 0m0.068s 00:08:58.353 user 0m0.041s 00:08:58.353 sys 0m0.026s 00:08:58.353 06:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.353 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.353 ************************************ 00:08:58.353 END TEST dd_double_output 00:08:58.353 ************************************ 00:08:58.353 06:33:38 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:58.353 06:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.353 06:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.353 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.353 ************************************ 00:08:58.353 START TEST dd_no_input 00:08:58.353 ************************************ 00:08:58.353 06:33:38 -- common/autotest_common.sh@1104 -- # no_input 00:08:58.353 06:33:38 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.353 06:33:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:58.353 06:33:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.353 06:33:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.353 06:33:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.353 [2024-07-12 06:33:38.188426] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:58.353 06:33:38 -- common/autotest_common.sh@643 -- # es=22 00:08:58.353 06:33:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:58.353 06:33:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:58.353 06:33:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:58.353 00:08:58.353 real 0m0.087s 00:08:58.353 user 0m0.058s 00:08:58.353 sys 0m0.028s 00:08:58.353 06:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.353 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.353 ************************************ 00:08:58.353 END TEST dd_no_input 00:08:58.353 ************************************ 00:08:58.353 06:33:38 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:58.353 06:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.353 06:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.353 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.353 ************************************ 
00:08:58.353 START TEST dd_no_output 00:08:58.353 ************************************ 00:08:58.353 06:33:38 -- common/autotest_common.sh@1104 -- # no_output 00:08:58.353 06:33:38 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.353 06:33:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:58.353 06:33:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.353 06:33:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.353 06:33:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.353 06:33:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.612 [2024-07-12 06:33:38.300725] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:58.612 06:33:38 -- common/autotest_common.sh@643 -- # es=22 00:08:58.612 06:33:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:58.612 06:33:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:58.612 06:33:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:58.612 00:08:58.612 real 0m0.065s 00:08:58.612 user 0m0.040s 00:08:58.612 sys 0m0.024s 00:08:58.612 06:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.612 ************************************ 00:08:58.612 END TEST dd_no_output 00:08:58.612 ************************************ 00:08:58.612 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.612 06:33:38 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:58.612 06:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.612 06:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.612 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.612 ************************************ 00:08:58.612 START TEST dd_wrong_blocksize 00:08:58.612 ************************************ 00:08:58.612 06:33:38 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:08:58.612 06:33:38 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:58.612 06:33:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:58.612 06:33:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:58.612 06:33:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.612 06:33:38 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:08:58.612 06:33:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.612 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.612 06:33:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.612 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.612 06:33:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.612 06:33:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.612 06:33:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:58.612 [2024-07-12 06:33:38.415473] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:58.612 06:33:38 -- common/autotest_common.sh@643 -- # es=22 00:08:58.612 06:33:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:58.612 06:33:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:58.612 06:33:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:58.612 00:08:58.612 real 0m0.067s 00:08:58.612 user 0m0.040s 00:08:58.612 sys 0m0.027s 00:08:58.612 06:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.613 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.613 ************************************ 00:08:58.613 END TEST dd_wrong_blocksize 00:08:58.613 ************************************ 00:08:58.613 06:33:38 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:58.613 06:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.613 06:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.613 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:58.613 ************************************ 00:08:58.613 START TEST dd_smaller_blocksize 00:08:58.613 ************************************ 00:08:58.613 06:33:38 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:08:58.613 06:33:38 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:58.613 06:33:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:58.613 06:33:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:58.613 06:33:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.613 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.613 06:33:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.613 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.613 06:33:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.613 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:58.613 06:33:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.613 06:33:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:58.613 06:33:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:58.871 [2024-07-12 06:33:38.530771] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:58.871 [2024-07-12 06:33:38.530861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71408 ] 00:08:58.871 [2024-07-12 06:33:38.673644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.871 [2024-07-12 06:33:38.712149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.871 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:58.872 [2024-07-12 06:33:38.760995] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:58.872 [2024-07-12 06:33:38.761026] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.130 [2024-07-12 06:33:38.824320] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:59.130 06:33:38 -- common/autotest_common.sh@643 -- # es=244 00:08:59.130 06:33:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:59.130 06:33:38 -- common/autotest_common.sh@652 -- # es=116 00:08:59.130 06:33:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:59.130 06:33:38 -- common/autotest_common.sh@660 -- # es=1 00:08:59.130 06:33:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:59.130 00:08:59.130 real 0m0.417s 00:08:59.130 user 0m0.211s 00:08:59.130 sys 0m0.100s 00:08:59.130 06:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.130 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:59.130 ************************************ 00:08:59.130 END TEST dd_smaller_blocksize 00:08:59.130 ************************************ 00:08:59.130 06:33:38 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:59.130 06:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.130 06:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.131 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:59.131 ************************************ 00:08:59.131 START TEST dd_invalid_count 00:08:59.131 ************************************ 00:08:59.131 06:33:38 -- common/autotest_common.sh@1104 -- # invalid_count 00:08:59.131 06:33:38 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:59.131 06:33:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:59.131 06:33:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:59.131 06:33:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.131 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.131 06:33:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.131 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.131 06:33:38 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.131 06:33:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.131 06:33:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.131 06:33:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.131 06:33:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:59.131 [2024-07-12 06:33:38.994467] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:59.131 06:33:39 -- common/autotest_common.sh@643 -- # es=22 00:08:59.131 06:33:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:59.131 06:33:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:59.131 06:33:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:59.131 00:08:59.131 real 0m0.067s 00:08:59.131 user 0m0.043s 00:08:59.131 sys 0m0.023s 00:08:59.131 06:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.131 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.131 ************************************ 00:08:59.131 END TEST dd_invalid_count 00:08:59.131 ************************************ 00:08:59.390 06:33:39 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:59.390 06:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.390 06:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.390 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.390 ************************************ 00:08:59.390 START TEST dd_invalid_oflag 00:08:59.390 ************************************ 00:08:59.390 06:33:39 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:08:59.390 06:33:39 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@640 -- # local es=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.390 06:33:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:59.390 [2024-07-12 06:33:39.107447] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:59.390 06:33:39 -- common/autotest_common.sh@643 -- # es=22 00:08:59.390 06:33:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:59.390 06:33:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:59.390 
06:33:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:59.390 00:08:59.390 real 0m0.064s 00:08:59.390 user 0m0.040s 00:08:59.390 sys 0m0.023s 00:08:59.390 06:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.390 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.390 ************************************ 00:08:59.390 END TEST dd_invalid_oflag 00:08:59.390 ************************************ 00:08:59.390 06:33:39 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:59.390 06:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.390 06:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.390 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.390 ************************************ 00:08:59.390 START TEST dd_invalid_iflag 00:08:59.390 ************************************ 00:08:59.390 06:33:39 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:08:59.390 06:33:39 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@640 -- # local es=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.390 06:33:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:59.390 [2024-07-12 06:33:39.223820] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:59.390 06:33:39 -- common/autotest_common.sh@643 -- # es=22 00:08:59.390 06:33:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:59.390 06:33:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:59.390 06:33:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:59.390 00:08:59.390 real 0m0.066s 00:08:59.390 user 0m0.044s 00:08:59.390 sys 0m0.021s 00:08:59.390 06:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.390 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.390 ************************************ 00:08:59.390 END TEST dd_invalid_iflag 00:08:59.390 ************************************ 00:08:59.390 06:33:39 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:59.390 06:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.390 06:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.390 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.390 ************************************ 00:08:59.390 START TEST dd_unknown_flag 00:08:59.390 ************************************ 00:08:59.390 06:33:39 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:08:59.390 06:33:39 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:59.390 06:33:39 -- common/autotest_common.sh@640 -- # local es=0 00:08:59.390 06:33:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:59.390 06:33:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.390 06:33:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.390 06:33:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:59.649 [2024-07-12 06:33:39.338860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:59.649 [2024-07-12 06:33:39.338948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71500 ] 00:08:59.649 [2024-07-12 06:33:39.477387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.649 [2024-07-12 06:33:39.519171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.908 [2024-07-12 06:33:39.570736] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:59.908 [2024-07-12 06:33:39.570809] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:59.908 [2024-07-12 06:33:39.570823] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:59.908 [2024-07-12 06:33:39.570837] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.908 [2024-07-12 06:33:39.639708] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:59.908 06:33:39 -- common/autotest_common.sh@643 -- # es=236 00:08:59.908 06:33:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:59.908 06:33:39 -- common/autotest_common.sh@652 -- # es=108 00:08:59.908 06:33:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:59.908 06:33:39 -- common/autotest_common.sh@660 -- # es=1 00:08:59.908 06:33:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:59.908 00:08:59.908 real 0m0.425s 00:08:59.908 user 0m0.218s 00:08:59.908 sys 0m0.103s 00:08:59.908 06:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.908 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 ************************************ 00:08:59.908 END 
TEST dd_unknown_flag 00:08:59.908 ************************************ 00:08:59.908 06:33:39 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:59.908 06:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.908 06:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.908 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 ************************************ 00:08:59.908 START TEST dd_invalid_json 00:08:59.908 ************************************ 00:08:59.908 06:33:39 -- common/autotest_common.sh@1104 -- # invalid_json 00:08:59.908 06:33:39 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:59.908 06:33:39 -- common/autotest_common.sh@640 -- # local es=0 00:08:59.908 06:33:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:59.908 06:33:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.908 06:33:39 -- dd/negative_dd.sh@95 -- # : 00:08:59.908 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.908 06:33:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.908 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.909 06:33:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.909 06:33:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:59.909 06:33:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.909 06:33:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.909 06:33:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:59.909 [2024-07-12 06:33:39.810669] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
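The dd negative tests in this stretch (dd_invalid_count, dd_invalid_oflag, dd_invalid_iflag, dd_unknown_flag, dd_invalid_json) all follow one shape: run spdk_dd with a single deliberately bad argument, require a non-zero exit, and fold the exit status into a small comparable value. A minimal bash sketch of that pattern, assuming a NOT-style wrapper in the spirit of autotest_common.sh (the helper below is illustrative, not the exact SPDK implementation):

# Succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # An exit status above 128 means death by signal (es = 128 + signum);
    # fold it back into the normal range, matching the trace above
    # (es=244 -> es=116 -> es=1).
    if ((es > 128)); then
        es=$((es - 128))
    fi
    ((es != 0)) # return 0 (test passes) only if the command failed
}

# Example: spdk_dd must reject a negative --count, as exercised above.
NOT spdk_dd --if=dd.dump0 --of=dd.dump1 --count=-9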
00:08:59.909 [2024-07-12 06:33:39.810748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71528 ] 00:09:00.168 [2024-07-12 06:33:39.945029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.168 [2024-07-12 06:33:39.982308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.168 [2024-07-12 06:33:39.982447] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:00.168 [2024-07-12 06:33:39.982466] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.168 [2024-07-12 06:33:39.982520] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:00.168 06:33:40 -- common/autotest_common.sh@643 -- # es=234 00:09:00.168 06:33:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:00.168 06:33:40 -- common/autotest_common.sh@652 -- # es=106 00:09:00.168 06:33:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:00.168 06:33:40 -- common/autotest_common.sh@660 -- # es=1 00:09:00.168 06:33:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:00.168 00:09:00.168 real 0m0.289s 00:09:00.168 user 0m0.133s 00:09:00.168 sys 0m0.054s 00:09:00.168 06:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.168 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.168 ************************************ 00:09:00.168 END TEST dd_invalid_json 00:09:00.168 ************************************ 00:09:00.427 00:09:00.427 real 0m2.399s 00:09:00.427 user 0m1.154s 00:09:00.427 sys 0m0.898s 00:09:00.427 06:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.427 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.427 ************************************ 00:09:00.427 END TEST spdk_dd_negative 00:09:00.427 ************************************ 00:09:00.427 00:09:00.427 real 1m4.068s 00:09:00.427 user 0m38.915s 00:09:00.427 sys 0m15.852s 00:09:00.427 06:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.427 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.427 ************************************ 00:09:00.427 END TEST spdk_dd 00:09:00.427 ************************************ 00:09:00.427 06:33:40 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@268 -- # timing_exit lib 00:09:00.427 06:33:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:00.427 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.427 06:33:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:09:00.427 06:33:40 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:09:00.427 06:33:40 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:00.427 06:33:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:00.427 06:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.427 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.427 ************************************ 00:09:00.427 START TEST 
nvmf_tcp 00:09:00.427 ************************************ 00:09:00.427 06:33:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:00.427 * Looking for test storage... 00:09:00.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:00.427 06:33:40 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:00.427 06:33:40 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:00.427 06:33:40 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.427 06:33:40 -- nvmf/common.sh@7 -- # uname -s 00:09:00.427 06:33:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.427 06:33:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.427 06:33:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.427 06:33:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.427 06:33:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.427 06:33:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.427 06:33:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.427 06:33:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.427 06:33:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.427 06:33:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.427 06:33:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:09:00.427 06:33:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:09:00.427 06:33:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.428 06:33:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.428 06:33:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.428 06:33:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.428 06:33:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.428 06:33:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.428 06:33:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.428 06:33:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.428 06:33:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.428 06:33:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.428 06:33:40 -- paths/export.sh@5 -- # export PATH 00:09:00.428 06:33:40 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.428 06:33:40 -- nvmf/common.sh@46 -- # : 0 00:09:00.428 06:33:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:00.428 06:33:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:00.428 06:33:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:00.428 06:33:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.428 06:33:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.428 06:33:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:00.428 06:33:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:00.428 06:33:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:00.428 06:33:40 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:00.428 06:33:40 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:00.428 06:33:40 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:00.428 06:33:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:00.428 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 06:33:40 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:00.428 06:33:40 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:00.428 06:33:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:00.428 06:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.428 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 ************************************ 00:09:00.428 START TEST nvmf_host_management 00:09:00.428 ************************************ 00:09:00.428 06:33:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:00.687 * Looking for test storage... 
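Before host_management.sh can exercise an NVMe/TCP target, nvmftestinit calls nvmf_veth_init, whose trace follows below: a target network namespace, veth pairs, a bridge, and a firewall rule for port 4420. A condensed, standalone sketch of that topology under the same names and 10.0.0.0/24 addressing the trace uses (run as root; illustrative, and omitting the second target interface nvmf_tgt_if2 for brevity):

# The target lives in its own namespace; each side gets a veth pair.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator is 10.0.0.1 on the host, target is 10.0.0.2 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic in, then verify reachability as the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2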
00:09:00.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.687 06:33:40 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.687 06:33:40 -- nvmf/common.sh@7 -- # uname -s 00:09:00.687 06:33:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.687 06:33:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.687 06:33:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.687 06:33:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.687 06:33:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.687 06:33:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.687 06:33:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.687 06:33:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.687 06:33:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.687 06:33:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.687 06:33:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:09:00.687 06:33:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:09:00.687 06:33:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.687 06:33:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.687 06:33:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.687 06:33:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.687 06:33:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.687 06:33:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.687 06:33:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.687 06:33:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.687 06:33:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.687 06:33:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.687 06:33:40 -- 
paths/export.sh@5 -- # export PATH 00:09:00.687 06:33:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.687 06:33:40 -- nvmf/common.sh@46 -- # : 0 00:09:00.687 06:33:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:00.687 06:33:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:00.687 06:33:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:00.687 06:33:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.687 06:33:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.687 06:33:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:00.688 06:33:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:00.688 06:33:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:00.688 06:33:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.688 06:33:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.688 06:33:40 -- target/host_management.sh@104 -- # nvmftestinit 00:09:00.688 06:33:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:00.688 06:33:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.688 06:33:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:00.688 06:33:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:00.688 06:33:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:00.688 06:33:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.688 06:33:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.688 06:33:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.688 06:33:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:00.688 06:33:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:00.688 06:33:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:00.688 06:33:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:00.688 06:33:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:00.688 06:33:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:00.688 06:33:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.688 06:33:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.688 06:33:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:00.688 06:33:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:00.688 06:33:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.688 06:33:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.688 06:33:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.688 06:33:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.688 06:33:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.688 06:33:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.688 06:33:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.688 06:33:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.688 06:33:40 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:09:00.688 Cannot find device "nvmf_init_br" 00:09:00.688 06:33:40 -- nvmf/common.sh@153 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:00.688 Cannot find device "nvmf_tgt_br" 00:09:00.688 06:33:40 -- nvmf/common.sh@154 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.688 Cannot find device "nvmf_tgt_br2" 00:09:00.688 06:33:40 -- nvmf/common.sh@155 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:00.688 Cannot find device "nvmf_init_br" 00:09:00.688 06:33:40 -- nvmf/common.sh@156 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:00.688 Cannot find device "nvmf_tgt_br" 00:09:00.688 06:33:40 -- nvmf/common.sh@157 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:00.688 Cannot find device "nvmf_tgt_br2" 00:09:00.688 06:33:40 -- nvmf/common.sh@158 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:00.688 Cannot find device "nvmf_br" 00:09:00.688 06:33:40 -- nvmf/common.sh@159 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:00.688 Cannot find device "nvmf_init_if" 00:09:00.688 06:33:40 -- nvmf/common.sh@160 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.688 06:33:40 -- nvmf/common.sh@161 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.688 06:33:40 -- nvmf/common.sh@162 -- # true 00:09:00.688 06:33:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.688 06:33:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.688 06:33:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.688 06:33:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.688 06:33:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.947 06:33:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.947 06:33:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.947 06:33:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:00.947 06:33:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:00.947 06:33:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:00.947 06:33:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:00.947 06:33:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:00.947 06:33:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:00.947 06:33:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.947 06:33:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.947 06:33:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.947 06:33:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:00.947 06:33:40 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:09:00.947 06:33:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.947 06:33:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.947 06:33:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:00.947 06:33:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.947 06:33:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.206 06:33:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:01.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:09:01.206 00:09:01.206 --- 10.0.0.2 ping statistics --- 00:09:01.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.206 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:01.206 06:33:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:01.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:01.206 00:09:01.206 --- 10.0.0.3 ping statistics --- 00:09:01.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.206 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:01.206 06:33:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:01.206 00:09:01.206 --- 10.0.0.1 ping statistics --- 00:09:01.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.206 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:01.206 06:33:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.206 06:33:40 -- nvmf/common.sh@421 -- # return 0 00:09:01.206 06:33:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:01.206 06:33:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.206 06:33:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:01.206 06:33:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:01.206 06:33:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.206 06:33:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:01.206 06:33:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:01.206 06:33:40 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:01.206 06:33:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:01.206 06:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.206 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:01.206 ************************************ 00:09:01.206 START TEST nvmf_host_management 00:09:01.206 ************************************ 00:09:01.206 06:33:40 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:09:01.206 06:33:40 -- target/host_management.sh@69 -- # starttarget 00:09:01.206 06:33:40 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:01.206 06:33:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:01.206 06:33:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:01.206 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:01.206 06:33:40 -- nvmf/common.sh@469 -- # nvmfpid=71786 00:09:01.206 06:33:40 -- nvmf/common.sh@470 -- # waitforlisten 71786 00:09:01.206 06:33:40 -- common/autotest_common.sh@819 -- # '[' -z 71786 ']' 00:09:01.206 06:33:40 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.206 06:33:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:01.206 06:33:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:01.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.206 06:33:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.206 06:33:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:01.206 06:33:40 -- common/autotest_common.sh@10 -- # set +x 00:09:01.206 [2024-07-12 06:33:40.981465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:01.206 [2024-07-12 06:33:40.982018] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.465 [2024-07-12 06:33:41.124373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.465 [2024-07-12 06:33:41.175293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.465 [2024-07-12 06:33:41.175482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.465 [2024-07-12 06:33:41.175497] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.465 [2024-07-12 06:33:41.175508] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.465 [2024-07-12 06:33:41.175671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.465 [2024-07-12 06:33:41.176416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.465 [2024-07-12 06:33:41.176591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.465 [2024-07-12 06:33:41.176600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.397 06:33:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:02.397 06:33:41 -- common/autotest_common.sh@852 -- # return 0 00:09:02.397 06:33:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:02.397 06:33:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:02.397 06:33:41 -- common/autotest_common.sh@10 -- # set +x 00:09:02.397 06:33:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.397 06:33:42 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.397 06:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.397 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.397 [2024-07-12 06:33:42.016554] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.397 06:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.397 06:33:42 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:02.397 06:33:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:02.397 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.397 06:33:42 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:02.397 06:33:42 -- target/host_management.sh@23 -- # cat 00:09:02.397 06:33:42 -- target/host_management.sh@30 -- 
# rpc_cmd 00:09:02.397 06:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.397 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.397 Malloc0 00:09:02.397 [2024-07-12 06:33:42.093342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.397 06:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.397 06:33:42 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:02.397 06:33:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:02.397 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.397 06:33:42 -- target/host_management.sh@73 -- # perfpid=71845 00:09:02.397 06:33:42 -- target/host_management.sh@74 -- # waitforlisten 71845 /var/tmp/bdevperf.sock 00:09:02.397 06:33:42 -- common/autotest_common.sh@819 -- # '[' -z 71845 ']' 00:09:02.397 06:33:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.397 06:33:42 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:02.397 06:33:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.397 06:33:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.397 06:33:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.397 06:33:42 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:02.397 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.397 06:33:42 -- nvmf/common.sh@520 -- # config=() 00:09:02.397 06:33:42 -- nvmf/common.sh@520 -- # local subsystem config 00:09:02.397 06:33:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:02.397 06:33:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:02.397 { 00:09:02.397 "params": { 00:09:02.397 "name": "Nvme$subsystem", 00:09:02.397 "trtype": "$TEST_TRANSPORT", 00:09:02.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:02.397 "adrfam": "ipv4", 00:09:02.397 "trsvcid": "$NVMF_PORT", 00:09:02.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:02.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:02.397 "hdgst": ${hdgst:-false}, 00:09:02.397 "ddgst": ${ddgst:-false} 00:09:02.397 }, 00:09:02.397 "method": "bdev_nvme_attach_controller" 00:09:02.397 } 00:09:02.397 EOF 00:09:02.397 )") 00:09:02.397 06:33:42 -- nvmf/common.sh@542 -- # cat 00:09:02.397 06:33:42 -- nvmf/common.sh@544 -- # jq . 00:09:02.397 06:33:42 -- nvmf/common.sh@545 -- # IFS=, 00:09:02.397 06:33:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:02.397 "params": { 00:09:02.397 "name": "Nvme0", 00:09:02.397 "trtype": "tcp", 00:09:02.397 "traddr": "10.0.0.2", 00:09:02.397 "adrfam": "ipv4", 00:09:02.397 "trsvcid": "4420", 00:09:02.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:02.397 "hdgst": false, 00:09:02.397 "ddgst": false 00:09:02.397 }, 00:09:02.397 "method": "bdev_nvme_attach_controller" 00:09:02.397 }' 00:09:02.398 [2024-07-12 06:33:42.189038] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
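The bdevperf run above never touches a config file on disk: gen_nvmf_target_json expands its per-subsystem heredoc and the result reaches bdevperf as --json /dev/fd/63 through bash process substitution. A minimal sketch of the same plumbing, with the attach parameters copied from the trace; the surrounding "subsystems" wrapper is the usual SPDK JSON-config shape and is assumed here, since the excerpt only shows the inner fragment:

gen_config() {
    # Full SPDK JSON config wrapping the bdev_nvme_attach_controller
    # fragment printed in the trace above.
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# <(gen_config) materializes as /dev/fd/63 in bdevperf's argument list.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_config) -q 64 -o 65536 -w verify -t 10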
00:09:02.398 [2024-07-12 06:33:42.189123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71845 ] 00:09:02.656 [2024-07-12 06:33:42.333073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.656 [2024-07-12 06:33:42.373718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.656 Running I/O for 10 seconds... 00:09:03.591 06:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.591 06:33:43 -- common/autotest_common.sh@852 -- # return 0 00:09:03.591 06:33:43 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:03.591 06:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.591 06:33:43 -- common/autotest_common.sh@10 -- # set +x 00:09:03.591 06:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.591 06:33:43 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.591 06:33:43 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:03.591 06:33:43 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:03.591 06:33:43 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:03.591 06:33:43 -- target/host_management.sh@52 -- # local ret=1 00:09:03.591 06:33:43 -- target/host_management.sh@53 -- # local i 00:09:03.591 06:33:43 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:03.591 06:33:43 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:03.591 06:33:43 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:03.591 06:33:43 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:03.591 06:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.591 06:33:43 -- common/autotest_common.sh@10 -- # set +x 00:09:03.591 06:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.591 06:33:43 -- target/host_management.sh@55 -- # read_io_count=1865 00:09:03.591 06:33:43 -- target/host_management.sh@58 -- # '[' 1865 -ge 100 ']' 00:09:03.591 06:33:43 -- target/host_management.sh@59 -- # ret=0 00:09:03.591 06:33:43 -- target/host_management.sh@60 -- # break 00:09:03.591 06:33:43 -- target/host_management.sh@64 -- # return 0 00:09:03.591 06:33:43 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:03.591 06:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.591 06:33:43 -- common/autotest_common.sh@10 -- # set +x 00:09:03.591 [2024-07-12 06:33:43.268987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a740 is same with the state(5) to be set 00:09:03.591 [2024-07-12 06:33:43.269289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.591 [2024-07-12 06:33:43.269321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.591 [2024-07-12 06:33:43.269343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.591 [2024-07-12 06:33:43.269355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.591 [2024-07-12 06:33:43.269366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.591 [2024-07-12 06:33:43.269376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.591 [2024-07-12 06:33:43.269387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.591 [2024-07-12 06:33:43.269396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.591 [2024-07-12 06:33:43.269407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.591 [2024-07-12 06:33:43.269417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128512 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.592 [2024-07-12 06:33:43.269791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.592 [2024-07-12 06:33:43.269802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.592 [2024-07-12 06:33:43.269811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:03.592 [2024-07-12 06:33:43.269822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.592 [... several dozen further READ/WRITE command prints on qid 1 (varying cid/lba), each paired with an identical ABORTED - SQ DELETION (00/08) completion, condensed here: every I/O still outstanding on the deleted submission queue was aborted ...]
00:09:03.592 [2024-07-12 06:33:43.270656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:03.592 [2024-07-12 06:33:43.270665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:03.592 [2024-07-12 06:33:43.270676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23866d0 is same with the state(5) to be set
00:09:03.592 [2024-07-12 06:33:43.270730] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23866d0 was disconnected and freed. reset controller.
00:09:03.592 [2024-07-12 06:33:43.271901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:09:03.592 06:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:03.592 06:33:43 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:09:03.592 task offset: 126976 on job bdev=Nvme0n1 fails
00:09:03.592
00:09:03.592 Latency(us)
00:09:03.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:03.592 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:03.592 Job: Nvme0n1 ended in about 0.75 seconds with error
00:09:03.592 Verification LBA range: start 0x0 length 0x400
00:09:03.592 Nvme0n1 : 0.75 2647.20 165.45 84.88 0.00 23041.54 5510.98 32648.84
00:09:03.592 ===================================================================================================================
00:09:03.592 Total : 2647.20 165.45 84.88 0.00 23041.54 5510.98 32648.84
00:09:03.592 06:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:03.592 06:33:43 -- common/autotest_common.sh@10 -- # set +x
00:09:03.592 [2024-07-12 06:33:43.273946] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:03.592 [2024-07-12 06:33:43.273980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fc370 (9): Bad file descriptor
00:09:03.592 06:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:03.592 06:33:43 -- target/host_management.sh@87 -- # sleep 1
00:09:03.592 [2024-07-12 06:33:43.285733] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
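What this burst appears to record, read end to end: the host-management suite toggles the host's access to the subsystem while bdevperf drives verify I/O (the revocation itself happened earlier in the run), the target tears down the host's submission queue, every command still queued on qid 1 completes with ABORTED - SQ DELETION (00/08), the job is marked failed (table above), and the initiator disconnects and resets the controller. The re-authorization step is the single RPC visible in the trace; stand-alone, with the arguments copied from the log, it is just:

  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0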
00:09:04.526 06:33:44 -- target/host_management.sh@91 -- # kill -9 71845
00:09:04.526 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71845) - No such process
00:09:04.526 06:33:44 -- target/host_management.sh@91 -- # true
00:09:04.526 06:33:44 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:09:04.526 06:33:44 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:09:04.526 06:33:44 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:09:04.526 06:33:44 -- nvmf/common.sh@520 -- # config=()
00:09:04.526 06:33:44 -- nvmf/common.sh@520 -- # local subsystem config
00:09:04.526 06:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:09:04.526 06:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:09:04.526 {
00:09:04.526   "params": {
00:09:04.526     "name": "Nvme$subsystem",
00:09:04.526     "trtype": "$TEST_TRANSPORT",
00:09:04.526     "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:04.526     "adrfam": "ipv4",
00:09:04.526     "trsvcid": "$NVMF_PORT",
00:09:04.526     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:04.526     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:04.526     "hdgst": ${hdgst:-false},
00:09:04.526     "ddgst": ${ddgst:-false}
00:09:04.526   },
00:09:04.526   "method": "bdev_nvme_attach_controller"
00:09:04.526 }
00:09:04.526 EOF
00:09:04.526 )")
00:09:04.526 06:33:44 -- nvmf/common.sh@542 -- # cat
00:09:04.526 06:33:44 -- nvmf/common.sh@544 -- # jq .
00:09:04.526 06:33:44 -- nvmf/common.sh@545 -- # IFS=,
00:09:04.526 06:33:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:09:04.526   "params": {
00:09:04.526     "name": "Nvme0",
00:09:04.526     "trtype": "tcp",
00:09:04.526     "traddr": "10.0.0.2",
00:09:04.526     "adrfam": "ipv4",
00:09:04.526     "trsvcid": "4420",
00:09:04.526     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:04.526     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:09:04.526     "hdgst": false,
00:09:04.526     "ddgst": false
00:09:04.526   },
00:09:04.526   "method": "bdev_nvme_attach_controller"
00:09:04.526 }'
00:09:04.526 [2024-07-12 06:33:44.336663] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:09:04.526 [2024-07-12 06:33:44.336749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71883 ]
00:09:04.784 [2024-07-12 06:33:44.475050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.784 [2024-07-12 06:33:44.523861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.784 Running I/O for 1 seconds...
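The gen_nvmf_target_json helper traced above builds one JSON fragment per subsystem with a bash here-document, joins the fragments with jq, and hands the result to bdevperf over a file descriptor (the --json /dev/fd/62 argument). A trimmed-down sketch of the same pattern; the outer {"subsystems":[...]} wrapper shape is an assumption reconstructed from the printf visible in the trace, not copied from nvmf/common.sh:

  gen_target_json() {
      # Build one attach-controller entry per subsystem id passed in (default: 0).
      local subsystem config=()
      for subsystem in "${@:-0}"; do
          config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
          )")
      done
      # Join the per-subsystem fragments with commas and wrap them in a
      # bdev-subsystem config; jq validates and pretty-prints the result.
      local IFS=,
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
  }

bdevperf would then consume the config through process substitution, roughly: bdevperf --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 1.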
00:09:06.163
00:09:06.163 Latency(us)
00:09:06.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:06.163 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:06.163 Verification LBA range: start 0x0 length 0x400
00:09:06.163 Nvme0n1 : 1.01 2609.09 163.07 0.00 0.00 24142.08 1563.93 28835.84
00:09:06.163 ===================================================================================================================
00:09:06.163 Total : 2609.09 163.07 0.00 0.00 24142.08 1563.93 28835.84
00:09:06.163 06:33:45 -- target/host_management.sh@101 -- # stoptarget
00:09:06.163 06:33:45 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:09:06.163 06:33:45 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:09:06.163 06:33:45 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:09:06.163 06:33:45 -- target/host_management.sh@40 -- # nvmftestfini
00:09:06.163 06:33:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:09:06.163 06:33:45 -- nvmf/common.sh@116 -- # sync
00:09:06.163 06:33:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:09:06.163 06:33:45 -- nvmf/common.sh@119 -- # set +e
00:09:06.163 06:33:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:09:06.163 06:33:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:06.163 06:33:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:09:06.163 06:33:46 -- nvmf/common.sh@123 -- # set -e
00:09:06.163 06:33:46 -- nvmf/common.sh@124 -- # return 0
00:09:06.163 06:33:46 -- nvmf/common.sh@477 -- # '[' -n 71786 ']'
00:09:06.163 06:33:46 -- nvmf/common.sh@478 -- # killprocess 71786
00:09:06.163 06:33:46 -- common/autotest_common.sh@926 -- # '[' -z 71786 ']'
00:09:06.163 06:33:46 -- common/autotest_common.sh@930 -- # kill -0 71786
00:09:06.163 06:33:46 -- common/autotest_common.sh@931 -- # uname
00:09:06.163 06:33:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:09:06.163 06:33:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71786
00:09:06.163 killing process with pid 71786
00:09:06.163 06:33:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:09:06.163 06:33:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:09:06.163 06:33:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71786'
00:09:06.163 06:33:46 -- common/autotest_common.sh@945 -- # kill 71786
00:09:06.163 06:33:46 -- common/autotest_common.sh@950 -- # wait 71786
00:09:06.422 [2024-07-12 06:33:46.190597] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:09:06.422 06:33:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:09:06.422 06:33:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:09:06.422 06:33:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:09:06.422 06:33:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:06.422 06:33:46 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:09:06.422 06:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:06.422 06:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:06.422 06:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:06.422 06:33:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:09:06.422
00:09:06.422 real 0m5.337s
00:09:06.422 user 0m22.576s
00:09:06.422 sys 0m1.224s
00:09:06.422 06:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:06.422 06:33:46 -- common/autotest_common.sh@10 -- # set +x
00:09:06.422 ************************************
00:09:06.422 END TEST nvmf_host_management
00:09:06.422 ************************************
00:09:06.422 06:33:46 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:09:06.422
00:09:06.422 ************************************
00:09:06.422 END TEST nvmf_host_management
00:09:06.422 ************************************
00:09:06.422 real 0m5.962s
00:09:06.422 user 0m22.698s
00:09:06.422 sys 0m1.460s
00:09:06.422 06:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:06.422 06:33:46 -- common/autotest_common.sh@10 -- # set +x
00:09:06.422 06:33:46 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:09:06.422 06:33:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:09:06.422 06:33:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:06.422 06:33:46 -- common/autotest_common.sh@10 -- # set +x
00:09:06.422 ************************************
00:09:06.422 START TEST nvmf_lvol
00:09:06.422 ************************************
00:09:06.422 06:33:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:09:06.681 * Looking for test storage...
00:09:06.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:06.681 06:33:46 -- nvmf/common.sh@7 -- # uname -s
00:09:06.681 06:33:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:06.681 06:33:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:06.681 06:33:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:06.681 06:33:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:06.681 06:33:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:06.681 06:33:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:06.681 06:33:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:06.681 06:33:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:06.681 06:33:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:06.681 06:33:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:06.681 06:33:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168
00:09:06.681 06:33:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168
00:09:06.681 06:33:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:06.681 06:33:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:06.681 06:33:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:09:06.681 06:33:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:06.681 06:33:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:06.681 06:33:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:06.681 06:33:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:06.681 06:33:46 -- paths/export.sh@2..@6 -- # [PATH export xtrace condensed: each step re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already heavily duplicated PATH, then exports and echoes it; the full multi-kilobyte values are omitted]
00:09:06.681 06:33:46 -- nvmf/common.sh@46 -- # : 0
00:09:06.681 06:33:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:09:06.681 06:33:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:09:06.681 06:33:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:09:06.681 06:33:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:06.681 06:33:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:06.681 06:33:46 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:09:06.681 06:33:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:09:06.681 06:33:46 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:06.681 06:33:46 -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:09:06.681 06:33:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:09:06.681 06:33:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:06.681 06:33:46 -- nvmf/common.sh@436 -- # prepare_net_devs
00:09:06.681 06:33:46 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:09:06.681 06:33:46 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:09:06.681 06:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:06.681 06:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:06.681 06:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:06.681 06:33:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:09:06.681 06:33:46 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:09:06.681 06:33:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:09:06.681 06:33:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:09:06.681 06:33:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:09:06.681 06:33:46 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:09:06.681 06:33:46 -- nvmf/common.sh@140..@151 -- # [interface-name xtrace condensed: NVMF_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.3, bridge nvmf_br, initiator pair nvmf_init_if/nvmf_init_br, namespace nvmf_tgt_ns_spdk, target pairs nvmf_tgt_if/nvmf_tgt_br and nvmf_tgt_if2/nvmf_tgt_br2]
00:09:06.681 06:33:46 -- nvmf/common.sh@153..@162 -- # [stale-topology teardown condensed: every ip link set ... nomaster / down / delete attempt found nothing to remove ("Cannot find device ...", "Cannot open network namespace ...: No such file or directory") and was ignored via true]
00:09:06.681 06:33:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:09:06.681 06:33:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:09:06.681 06:33:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:09:06.681 06:33:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:09:06.681 06:33:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:09:06.940 06:33:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:09:06.940 06:33:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:09:06.940 06:33:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:09:06.940 06:33:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:09:06.940 06:33:46 -- nvmf/common.sh@182..@188 -- # [link bring-up condensed: nvmf_init_if, nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 up on the host; nvmf_tgt_if, nvmf_tgt_if2 and lo up inside nvmf_tgt_ns_spdk]
00:09:06.940 06:33:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:09:06.940 06:33:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:09:06.940 06:33:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:09:06.940 06:33:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:06.940 06:33:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:06.940 06:33:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:06.940 06:33:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:06.940 06:33:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:09:06.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:06.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:09:06.940
00:09:06.940 --- 10.0.0.2 ping statistics ---
00:09:06.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.940 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:09:06.940 06:33:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:09:06.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:06.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms
00:09:06.940
00:09:06.940 --- 10.0.0.3 ping statistics ---
00:09:06.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.940 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:09:06.940 06:33:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:06.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:06.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:09:06.940
00:09:06.940 --- 10.0.0.1 ping statistics ---
00:09:06.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.940 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:09:06.940 06:33:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:06.940 06:33:46 -- nvmf/common.sh@421 -- # return 0
00:09:06.940 06:33:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:09:06.940 06:33:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:06.940 06:33:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:09:06.940 06:33:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:09:06.940 06:33:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:06.940 06:33:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:09:06.940 06:33:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:09:06.940 06:33:46 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:09:06.940 06:33:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:09:06.940 06:33:46 -- common/autotest_common.sh@712 -- # xtrace_disable
00:09:06.940 06:33:46 -- common/autotest_common.sh@10 -- # set +x
00:09:06.940 06:33:46 -- nvmf/common.sh@469 -- # nvmfpid=72118
00:09:06.940 06:33:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:09:06.940 06:33:46 -- nvmf/common.sh@470 -- # waitforlisten 72118
00:09:06.940 06:33:46 -- common/autotest_common.sh@819 -- # '[' -z 72118 ']'
00:09:06.940 06:33:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:06.940 06:33:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:09:06.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:06.940 06:33:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:06.940 06:33:46 -- common/autotest_common.sh@828 -- # xtrace_disable
00:09:06.940 06:33:46 -- common/autotest_common.sh@10 -- # set +x
00:09:07.199 [2024-07-12 06:33:46.835924] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:09:07.199 [2024-07-12 06:33:46.836050] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:07.199 [2024-07-12 06:33:46.979459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:07.199 [2024-07-12 06:33:47.020238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:07.199 [2024-07-12 06:33:47.020658] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:07.199 [2024-07-12 06:33:47.020803] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:07.199 [2024-07-12 06:33:47.020994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
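Stripped of the xtrace noise, the network nvmf_veth_init just (re)built and the pings verified is three veth pairs and a bridge, with the target side of each pair pushed into a private namespace. A condensed restatement of the commands traced above (same names and addresses; the error-tolerant teardown of any previous topology is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as checked above

The target process then runs inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), so the initiator-side tools exercise a real TCP hop instead of loopback.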
00:09:07.199 [2024-07-12 06:33:47.021383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.199 [2024-07-12 06:33:47.021455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:07.199 [2024-07-12 06:33:47.021461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.133 06:33:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:09:08.133 06:33:47 -- common/autotest_common.sh@852 -- # return 0
00:09:08.133 06:33:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:09:08.133 06:33:47 -- common/autotest_common.sh@718 -- # xtrace_disable
00:09:08.134 06:33:47 -- common/autotest_common.sh@10 -- # set +x
00:09:08.134 06:33:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:08.134 06:33:47 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:08.392 [2024-07-12 06:33:48.087594] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:08.392 06:33:48 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:08.651 06:33:48 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:09:08.651 06:33:48 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:08.909 06:33:48 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:09:08.909 06:33:48 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:09:09.168 06:33:48 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:09:09.426 06:33:49 -- target/nvmf_lvol.sh@29 -- # lvs=3e6b1cf7-a517-44e3-9cca-5ccace31fa64
00:09:09.426 06:33:49 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3e6b1cf7-a517-44e3-9cca-5ccace31fa64 lvol 20
00:09:09.684 06:33:49 -- target/nvmf_lvol.sh@32 -- # lvol=06ceac5d-179f-4801-9ae4-6f4da3e718e1
00:09:09.684 06:33:49 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:09.944 06:33:49 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 06ceac5d-179f-4801-9ae4-6f4da3e718e1
00:09:10.203 06:33:50 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:10.463 [2024-07-12 06:33:50.282563] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:10.463 06:33:50 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:10.721 06:33:50 -- target/nvmf_lvol.sh@42 -- # perf_pid=72188
00:09:10.721 06:33:50 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:09:10.721 06:33:50 -- target/nvmf_lvol.sh@44 -- # sleep 1
00:09:11.656 06:33:51 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 06ceac5d-179f-4801-9ae4-6f4da3e718e1 MY_SNAPSHOT
00:09:12.223 06:33:51 -- target/nvmf_lvol.sh@47 -- # snapshot=3447ec37-bc84-4684-9d77-576d568709aa
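Condensing the xtrace above: the entire target stack for this suite is provisioned over the RPC socket, two malloc bdevs striped into raid0, an lvstore on top, one 20 MiB lvol, and a TCP subsystem exporting it. In sequence (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; each lvol RPC prints the new object's UUID, which the script captures):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                        # -> Malloc0 (64 MiB, 512 B blocks)
  rpc.py bdev_malloc_create 64 512                        # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)        # -> 3e6b1cf7-a517-44e3-9cca-5ccace31fa64
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume -> 06ceac5d-...
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420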
00:09:12.223 06:33:51 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 06ceac5d-179f-4801-9ae4-6f4da3e718e1 30
00:09:12.481 06:33:52 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 3447ec37-bc84-4684-9d77-576d568709aa MY_CLONE
00:09:12.740 06:33:52 -- target/nvmf_lvol.sh@49 -- # clone=9651e8b4-0799-404d-a7ce-ad12e608a34f
00:09:12.740 06:33:52 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9651e8b4-0799-404d-a7ce-ad12e608a34f
00:09:12.999 06:33:52 -- target/nvmf_lvol.sh@53 -- # wait 72188
00:09:21.135 Initializing NVMe Controllers
00:09:21.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:09:21.135 Controller IO queue size 128, less than required.
00:09:21.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:21.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:21.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:21.135 Initialization complete. Launching workers.
00:09:21.135 ========================================================
00:09:21.135 Latency(us)
00:09:21.135 Device Information : IOPS MiB/s Average min max
00:09:21.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9468.39 36.99 13526.22 2174.20 76603.27
00:09:21.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9432.79 36.85 13568.44 2790.37 46774.15
00:09:21.135 ========================================================
00:09:21.135 Total : 18901.18 73.83 13547.29 2174.20 76603.27
00:09:21.135
00:09:21.135 06:34:00 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:21.393 06:34:01 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 06ceac5d-179f-4801-9ae4-6f4da3e718e1
00:09:21.393 06:34:01 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e6b1cf7-a517-44e3-9cca-5ccace31fa64
00:09:21.650 06:34:01 -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:21.650 06:34:01 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:21.650 06:34:01 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:21.650 06:34:01 -- nvmf/common.sh@476 -- # nvmfcleanup
00:09:21.650 06:34:01 -- nvmf/common.sh@116 -- # sync
00:09:21.908 06:34:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:09:21.908 06:34:01 -- nvmf/common.sh@119 -- # set +e
00:09:21.908 06:34:01 -- nvmf/common.sh@120 -- # for i in {1..20}
00:09:21.908 06:34:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:21.908 06:34:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:09:21.908 06:34:01 -- nvmf/common.sh@123 -- # set -e
00:09:21.908 06:34:01 -- nvmf/common.sh@124 -- # return 0
00:09:21.908 06:34:01 -- nvmf/common.sh@477 -- # '[' -n 72118 ']'
00:09:21.908 06:34:01 -- nvmf/common.sh@478 -- # killprocess 72118
00:09:21.908 06:34:01 -- common/autotest_common.sh@926 -- # '[' -z 72118 ']'
00:09:21.908 06:34:01 -- common/autotest_common.sh@930 -- # kill -0 72118
00:09:21.908 06:34:01 -- common/autotest_common.sh@931 -- # uname
00:09:21.908 06:34:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:09:21.908 06:34:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72118
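While spdk_nvme_perf hammered the volume from cores 3 and 4, the suite walked the lvol lifecycle seen above: snapshot the live volume, grow it, clone the snapshot, then inflate the clone so it no longer depends on the snapshot. As stand-alone RPCs (UUIDs are the ones returned in the trace; sizes in MiB; rpc.py as before):

  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current data -> 3447ec37-...
  rpc.py bdev_lvol_resize "$lvol" 30                          # grow the live lvol from 20 to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)        # thin, writable clone -> 9651e8b4-...
  rpc.py bdev_lvol_inflate "$clone"                           # allocate all clusters; drops the snapshot dependency

The point of doing this under load is that every step must be safe while the namespace is attached to a live NVMe-oF subsystem and I/O is in flight.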
00:09:21.908 killing process with pid 72118
00:09:21.908 06:34:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:09:21.908 06:34:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:09:21.908 06:34:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72118'
00:09:21.908 06:34:01 -- common/autotest_common.sh@945 -- # kill 72118
00:09:21.908 06:34:01 -- common/autotest_common.sh@950 -- # wait 72118
00:09:22.166 06:34:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:09:22.166 06:34:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:09:22.166 06:34:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:09:22.166 06:34:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:22.166 06:34:01 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:09:22.166 06:34:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:22.166 06:34:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:22.166 06:34:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:22.166 06:34:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:09:22.166 ************************************
00:09:22.166 END TEST nvmf_lvol
00:09:22.166 ************************************
00:09:22.166 real 0m15.571s
00:09:22.166 user 1m4.694s
00:09:22.166 sys 0m4.701s
00:09:22.167 06:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:22.167 06:34:01 -- common/autotest_common.sh@10 -- # set +x
00:09:22.167 06:34:01 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:22.167 06:34:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:09:22.167 06:34:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:22.167 06:34:01 -- common/autotest_common.sh@10 -- # set +x
00:09:22.167 ************************************
00:09:22.167 START TEST nvmf_lvs_grow
00:09:22.167 ************************************
00:09:22.167 06:34:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:22.167 * Looking for test storage...
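The START/END banners and the real/user/sys triplets that bracket each suite, including the nvmf_lvol totals just above, come from the harness's run_test wrapper. Roughly, a hedged approximation of the helper in autotest_common.sh, not its exact text:

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"          # produces the per-suite real/user/sys lines in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }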
00:09:22.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:22.167 06:34:02 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:22.167 06:34:02 -- [nvmf/common.sh@7..@21, scripts/common.sh and paths/export.sh xtrace condensed: the same environment bootstrap already traced when nvmf_lvol.sh sourced common.sh above, with identical NVMF_* defaults, the same NVME_HOSTNQN/NVME_HOSTID (b322988a-296a-4d08-987d-2f44d8098168) and the same repetitive PATH exports]
00:09:22.167 06:34:02 -- nvmf/common.sh@46 -- # : 0
00:09:22.167 06:34:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:09:22.167 06:34:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:09:22.167 06:34:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:09:22.167 06:34:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:22.167 06:34:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:22.167 06:34:02 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:09:22.167 06:34:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:09:22.167 06:34:02 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:09:22.167 06:34:02 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:22.167 06:34:02 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:09:22.167 06:34:02 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit
00:09:22.167 06:34:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:09:22.167 06:34:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:22.167 06:34:02 -- nvmf/common.sh@436 -- # prepare_net_devs
00:09:22.167 06:34:02 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:09:22.167 06:34:02 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:09:22.167 06:34:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:22.167 06:34:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:22.167 06:34:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:22.167 06:34:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:09:22.167 06:34:02 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:09:22.167 06:34:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:09:22.167 06:34:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:09:22.167 06:34:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:09:22.167 06:34:02 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:09:22.167 06:34:02 -- nvmf/common.sh@140..@151 -- # [same NVMF_* address, interface and bridge names as in the nvmf_lvol setup above]
00:09:22.426 06:34:02 -- nvmf/common.sh@153..@162 -- # [stale-topology teardown condensed, as in the earlier run: nomaster/down/delete attempts on the nvmf_* links and the old namespace, each reporting "Cannot find device" / "Cannot open network namespace" and ignored via true]
00:09:22.426 06:34:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:09:22.426 06:34:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:09:22.426 06:34:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:09:22.426 06:34:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:09:22.426 06:34:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:09:22.426 06:34:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:09:22.426 06:34:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:09:22.426 06:34:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:09:22.426 06:34:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:09:22.426 06:34:02 -- nvmf/common.sh@182..@188 -- # [link bring-up condensed: host-side and namespace-side interfaces up, lo up in the namespace]
00:09:22.685 06:34:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:09:22.685 06:34:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:09:22.685 06:34:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:09:22.685 06:34:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:22.685 06:34:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:22.685 06:34:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:22.685 06:34:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:22.685 06:34:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:09:22.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:22.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms
00:09:22.685
00:09:22.685 --- 10.0.0.2 ping statistics ---
00:09:22.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:22.685 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:09:22.685 06:34:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:09:22.685 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:22.685 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:09:22.685
00:09:22.685 --- 10.0.0.3 ping statistics ---
00:09:22.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:22.685 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:09:22.685 06:34:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:22.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:22.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:09:22.685
00:09:22.685 --- 10.0.0.1 ping statistics ---
00:09:22.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:22.685 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:09:22.685 06:34:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:22.685 06:34:02 -- nvmf/common.sh@421 -- # return 0
00:09:22.685 06:34:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:09:22.685 06:34:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:22.685 06:34:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:09:22.685 06:34:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:09:22.685 06:34:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:22.685 06:34:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:09:22.685 06:34:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:09:22.685 06:34:02 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:09:22.685 06:34:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:09:22.685 06:34:02 -- common/autotest_common.sh@712 -- # xtrace_disable
00:09:22.685 06:34:02 -- common/autotest_common.sh@10 -- # set +x
00:09:22.685 06:34:02 -- nvmf/common.sh@469 -- # nvmfpid=72515
00:09:22.685 06:34:02 -- nvmf/common.sh@470 -- # waitforlisten 72515
00:09:22.685 06:34:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:22.685 06:34:02 -- common/autotest_common.sh@819 -- # '[' -z 72515 ']'
00:09:22.685 06:34:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:22.685 06:34:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:09:22.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:22.686 06:34:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:22.686 06:34:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:09:22.686 06:34:02 -- common/autotest_common.sh@10 -- # set +x
00:09:22.686 [2024-07-12 06:34:02.506516] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
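waitforlisten, used after every app launch here, just polls until the new process answers on its RPC socket. A minimal sketch of the idea; the real helper in autotest_common.sh is more elaborate (retry budget from max_retries, nicer diagnostics), so treat names and structure here as illustrative:

  waitforlisten() {
      local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
      # Give up if the process dies; succeed once the RPC server answers.
      while kill -0 "$pid" 2>/dev/null; do
          if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
              return 0
          fi
          sleep 0.1
      done
      return 1
  }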
00:09:22.686 [2024-07-12 06:34:02.507393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.944 [2024-07-12 06:34:02.655443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.944 [2024-07-12 06:34:02.695730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.944 [2024-07-12 06:34:02.696107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.944 [2024-07-12 06:34:02.696268] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.944 [2024-07-12 06:34:02.696430] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.944 [2024-07-12 06:34:02.696570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.889 06:34:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.889 06:34:03 -- common/autotest_common.sh@852 -- # return 0 00:09:23.889 06:34:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:23.889 06:34:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:23.889 06:34:03 -- common/autotest_common.sh@10 -- # set +x 00:09:23.889 06:34:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.889 06:34:03 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.147 [2024-07-12 06:34:03.814396] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:24.147 06:34:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:24.147 06:34:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.147 06:34:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.147 ************************************ 00:09:24.147 START TEST lvs_grow_clean 00:09:24.147 ************************************ 00:09:24.147 06:34:03 -- common/autotest_common.sh@1104 -- # lvs_grow 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:24.147 06:34:03 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.429 06:34:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:24.429 06:34:04 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:24.687 06:34:04 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=78d47eea-ca27-4244-a161-e740f0f933db 00:09:24.687 06:34:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:24.687 06:34:04 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:24.946 06:34:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:24.946 06:34:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:24.946 06:34:04 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 78d47eea-ca27-4244-a161-e740f0f933db lvol 150 00:09:25.204 06:34:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a977f62-7031-40a6-966b-540c71b2c406 00:09:25.204 06:34:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.204 06:34:04 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:25.462 [2024-07-12 06:34:05.231981] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:25.462 [2024-07-12 06:34:05.232115] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:25.462 true 00:09:25.462 06:34:05 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:25.462 06:34:05 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:25.720 06:34:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:25.720 06:34:05 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:25.980 06:34:05 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a977f62-7031-40a6-966b-540c71b2c406 00:09:26.239 06:34:05 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:26.497 [2024-07-12 06:34:06.240625] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.497 06:34:06 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.756 06:34:06 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72598 00:09:26.756 06:34:06 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.756 06:34:06 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:26.756 06:34:06 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72598 /var/tmp/bdevperf.sock 00:09:26.756 06:34:06 -- common/autotest_common.sh@819 -- # '[' -z 72598 ']' 00:09:26.756 06:34:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.756 06:34:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.756 06:34:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
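Condensed, the control-plane work the clean pass has done so far is a short rpc.py sequence; a sketch assembled from the commands traced above (UUIDs are per-run; the sketch compresses the ordering, it is not the script's literal flow):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO"
  "$RPC" bdev_aio_create "$AIO" aio_bdev 4096
  LVS=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # prints the lvstore UUID
  LVOL=$("$RPC" bdev_lvol_create -u "$LVS" lvol 150)          # 150 MiB volume, prints its UUID
  truncate -s 400M "$AIO" && "$RPC" bdev_aio_rescan aio_bdev  # grow the file, re-read its size
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420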
00:09:26.756 06:34:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.756 06:34:06 -- common/autotest_common.sh@10 -- # set +x 00:09:26.756 [2024-07-12 06:34:06.553995] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:26.756 [2024-07-12 06:34:06.554076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72598 ] 00:09:27.015 [2024-07-12 06:34:06.690536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.015 [2024-07-12 06:34:06.736938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.953 06:34:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.953 06:34:07 -- common/autotest_common.sh@852 -- # return 0 00:09:27.953 06:34:07 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:27.953 Nvme0n1 00:09:27.953 06:34:07 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.520 [ 00:09:28.520 { 00:09:28.520 "name": "Nvme0n1", 00:09:28.520 "aliases": [ 00:09:28.520 "9a977f62-7031-40a6-966b-540c71b2c406" 00:09:28.520 ], 00:09:28.520 "product_name": "NVMe disk", 00:09:28.520 "block_size": 4096, 00:09:28.520 "num_blocks": 38912, 00:09:28.520 "uuid": "9a977f62-7031-40a6-966b-540c71b2c406", 00:09:28.520 "assigned_rate_limits": { 00:09:28.520 "rw_ios_per_sec": 0, 00:09:28.520 "rw_mbytes_per_sec": 0, 00:09:28.520 "r_mbytes_per_sec": 0, 00:09:28.520 "w_mbytes_per_sec": 0 00:09:28.520 }, 00:09:28.520 "claimed": false, 00:09:28.520 "zoned": false, 00:09:28.520 "supported_io_types": { 00:09:28.520 "read": true, 00:09:28.520 "write": true, 00:09:28.520 "unmap": true, 00:09:28.520 "write_zeroes": true, 00:09:28.520 "flush": true, 00:09:28.520 "reset": true, 00:09:28.520 "compare": true, 00:09:28.520 "compare_and_write": true, 00:09:28.520 "abort": true, 00:09:28.520 "nvme_admin": true, 00:09:28.520 "nvme_io": true 00:09:28.520 }, 00:09:28.520 "driver_specific": { 00:09:28.520 "nvme": [ 00:09:28.520 { 00:09:28.520 "trid": { 00:09:28.520 "trtype": "TCP", 00:09:28.520 "adrfam": "IPv4", 00:09:28.520 "traddr": "10.0.0.2", 00:09:28.520 "trsvcid": "4420", 00:09:28.520 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.520 }, 00:09:28.520 "ctrlr_data": { 00:09:28.520 "cntlid": 1, 00:09:28.520 "vendor_id": "0x8086", 00:09:28.520 "model_number": "SPDK bdev Controller", 00:09:28.520 "serial_number": "SPDK0", 00:09:28.520 "firmware_revision": "24.01.1", 00:09:28.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.520 "oacs": { 00:09:28.520 "security": 0, 00:09:28.520 "format": 0, 00:09:28.520 "firmware": 0, 00:09:28.520 "ns_manage": 0 00:09:28.520 }, 00:09:28.520 "multi_ctrlr": true, 00:09:28.520 "ana_reporting": false 00:09:28.520 }, 00:09:28.520 "vs": { 00:09:28.520 "nvme_version": "1.3" 00:09:28.520 }, 00:09:28.520 "ns_data": { 00:09:28.520 "id": 1, 00:09:28.520 "can_share": true 00:09:28.520 } 00:09:28.520 } 00:09:28.520 ], 00:09:28.520 "mp_policy": "active_passive" 00:09:28.520 } 00:09:28.520 } 00:09:28.520 ] 00:09:28.521 06:34:08 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72627 00:09:28.521 06:34:08 -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.521 06:34:08 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.521 Running I/O for 10 seconds... 00:09:29.457 Latency(us) 00:09:29.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.457 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:29.457 =================================================================================================================== 00:09:29.457 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:29.457 00:09:30.392 06:34:10 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:30.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.651 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:30.651 =================================================================================================================== 00:09:30.651 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:30.651 00:09:30.651 true 00:09:30.651 06:34:10 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:30.651 06:34:10 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:30.909 06:34:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:30.909 06:34:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:30.909 06:34:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 72627 00:09:31.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.476 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:09:31.476 =================================================================================================================== 00:09:31.476 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:09:31.476 00:09:32.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.410 Nvme0n1 : 4.00 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:09:32.410 =================================================================================================================== 00:09:32.410 Total : 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:09:32.410 00:09:33.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.785 Nvme0n1 : 5.00 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:09:33.785 =================================================================================================================== 00:09:33.785 Total : 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:09:33.785 00:09:34.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.719 Nvme0n1 : 6.00 7387.17 28.86 0.00 0.00 0.00 0.00 0.00 00:09:34.719 =================================================================================================================== 00:09:34.719 Total : 7387.17 28.86 0.00 0.00 0.00 0.00 0.00 00:09:34.719 00:09:35.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.653 Nvme0n1 : 7.00 7329.71 28.63 0.00 0.00 0.00 0.00 0.00 00:09:35.653 =================================================================================================================== 00:09:35.653 Total : 7329.71 28.63 0.00 0.00 0.00 0.00 0.00 00:09:35.653 00:09:36.591 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:09:36.591 Nvme0n1 : 8.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:36.591 =================================================================================================================== 00:09:36.591 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:36.591 00:09:37.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.533 Nvme0n1 : 9.00 7295.44 28.50 0.00 0.00 0.00 0.00 0.00 00:09:37.533 =================================================================================================================== 00:09:37.533 Total : 7295.44 28.50 0.00 0.00 0.00 0.00 0.00 00:09:37.533 00:09:38.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.468 Nvme0n1 : 10.00 7277.10 28.43 0.00 0.00 0.00 0.00 0.00 00:09:38.468 =================================================================================================================== 00:09:38.468 Total : 7277.10 28.43 0.00 0.00 0.00 0.00 0.00 00:09:38.468 00:09:38.468 00:09:38.468 Latency(us) 00:09:38.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.468 Nvme0n1 : 10.02 7277.04 28.43 0.00 0.00 17583.95 13405.09 42419.67 00:09:38.468 =================================================================================================================== 00:09:38.468 Total : 7277.04 28.43 0.00 0.00 17583.95 13405.09 42419.67 00:09:38.468 0 00:09:38.468 06:34:18 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72598 00:09:38.468 06:34:18 -- common/autotest_common.sh@926 -- # '[' -z 72598 ']' 00:09:38.468 06:34:18 -- common/autotest_common.sh@930 -- # kill -0 72598 00:09:38.468 06:34:18 -- common/autotest_common.sh@931 -- # uname 00:09:38.468 06:34:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:38.468 06:34:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72598 00:09:38.468 killing process with pid 72598 00:09:38.468 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.468 00:09:38.468 Latency(us) 00:09:38.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.468 =================================================================================================================== 00:09:38.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.468 06:34:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:38.468 06:34:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:38.468 06:34:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72598' 00:09:38.468 06:34:18 -- common/autotest_common.sh@945 -- # kill 72598 00:09:38.468 06:34:18 -- common/autotest_common.sh@950 -- # wait 72598 00:09:38.727 06:34:18 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:38.985 06:34:18 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:38.985 06:34:18 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:39.243 06:34:19 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:39.243 06:34:19 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:39.243 06:34:19 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.502 [2024-07-12 06:34:19.306786] 
vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:39.502 06:34:19 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:39.502 06:34:19 -- common/autotest_common.sh@640 -- # local es=0 00:09:39.502 06:34:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:39.502 06:34:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.502 06:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:39.502 06:34:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.502 06:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:39.502 06:34:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.502 06:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:39.502 06:34:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.502 06:34:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:39.502 06:34:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:39.761 request: 00:09:39.761 { 00:09:39.761 "uuid": "78d47eea-ca27-4244-a161-e740f0f933db", 00:09:39.761 "method": "bdev_lvol_get_lvstores", 00:09:39.761 "req_id": 1 00:09:39.761 } 00:09:39.761 Got JSON-RPC error response 00:09:39.761 response: 00:09:39.761 { 00:09:39.761 "code": -19, 00:09:39.761 "message": "No such device" 00:09:39.761 } 00:09:39.761 06:34:19 -- common/autotest_common.sh@643 -- # es=1 00:09:39.761 06:34:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:39.761 06:34:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:39.761 06:34:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:39.761 06:34:19 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.019 aio_bdev 00:09:40.019 06:34:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9a977f62-7031-40a6-966b-540c71b2c406 00:09:40.019 06:34:19 -- common/autotest_common.sh@887 -- # local bdev_name=9a977f62-7031-40a6-966b-540c71b2c406 00:09:40.019 06:34:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:40.019 06:34:19 -- common/autotest_common.sh@889 -- # local i 00:09:40.019 06:34:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:40.019 06:34:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:40.019 06:34:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.277 06:34:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a977f62-7031-40a6-966b-540c71b2c406 -t 2000 00:09:40.535 [ 00:09:40.535 { 00:09:40.535 "name": "9a977f62-7031-40a6-966b-540c71b2c406", 00:09:40.535 "aliases": [ 00:09:40.535 "lvs/lvol" 00:09:40.535 ], 00:09:40.535 "product_name": "Logical Volume", 00:09:40.535 "block_size": 4096, 00:09:40.535 "num_blocks": 38912, 00:09:40.535 "uuid": "9a977f62-7031-40a6-966b-540c71b2c406", 00:09:40.535 "assigned_rate_limits": { 00:09:40.535 
"rw_ios_per_sec": 0, 00:09:40.535 "rw_mbytes_per_sec": 0, 00:09:40.535 "r_mbytes_per_sec": 0, 00:09:40.535 "w_mbytes_per_sec": 0 00:09:40.535 }, 00:09:40.535 "claimed": false, 00:09:40.535 "zoned": false, 00:09:40.535 "supported_io_types": { 00:09:40.535 "read": true, 00:09:40.535 "write": true, 00:09:40.535 "unmap": true, 00:09:40.535 "write_zeroes": true, 00:09:40.535 "flush": false, 00:09:40.535 "reset": true, 00:09:40.535 "compare": false, 00:09:40.535 "compare_and_write": false, 00:09:40.535 "abort": false, 00:09:40.535 "nvme_admin": false, 00:09:40.535 "nvme_io": false 00:09:40.535 }, 00:09:40.535 "driver_specific": { 00:09:40.535 "lvol": { 00:09:40.535 "lvol_store_uuid": "78d47eea-ca27-4244-a161-e740f0f933db", 00:09:40.535 "base_bdev": "aio_bdev", 00:09:40.535 "thin_provision": false, 00:09:40.535 "snapshot": false, 00:09:40.535 "clone": false, 00:09:40.535 "esnap_clone": false 00:09:40.535 } 00:09:40.535 } 00:09:40.535 } 00:09:40.535 ] 00:09:40.535 06:34:20 -- common/autotest_common.sh@895 -- # return 0 00:09:40.535 06:34:20 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:40.535 06:34:20 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:40.793 06:34:20 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:40.793 06:34:20 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:40.793 06:34:20 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:41.052 06:34:20 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:41.052 06:34:20 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9a977f62-7031-40a6-966b-540c71b2c406 00:09:41.310 06:34:21 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78d47eea-ca27-4244-a161-e740f0f933db 00:09:41.569 06:34:21 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.827 06:34:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.085 ************************************ 00:09:42.085 END TEST lvs_grow_clean 00:09:42.085 ************************************ 00:09:42.085 00:09:42.085 real 0m18.049s 00:09:42.085 user 0m17.253s 00:09:42.085 sys 0m2.374s 00:09:42.085 06:34:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.085 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:42.085 06:34:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:42.085 06:34:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.085 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:09:42.085 ************************************ 00:09:42.085 START TEST lvs_grow_dirty 00:09:42.085 ************************************ 00:09:42.085 06:34:21 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:42.085 06:34:21 -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.085 06:34:21 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.344 06:34:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:42.344 06:34:22 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:42.603 06:34:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:42.603 06:34:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:42.603 06:34:22 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:42.861 06:34:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:42.861 06:34:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:42.861 06:34:22 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e7d597d3-63f8-4e64-969c-fa1dfb928007 lvol 150 00:09:43.120 06:34:23 -- target/nvmf_lvs_grow.sh@33 -- # lvol=282568f0-bf66-4781-b144-e20e7073ab9f 00:09:43.120 06:34:23 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.120 06:34:23 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:43.377 [2024-07-12 06:34:23.226880] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:43.377 [2024-07-12 06:34:23.227025] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:43.377 true 00:09:43.377 06:34:23 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:43.377 06:34:23 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:43.646 06:34:23 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:43.646 06:34:23 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:43.918 06:34:23 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 282568f0-bf66-4781-b144-e20e7073ab9f 00:09:44.177 06:34:24 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:44.435 06:34:24 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
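The cluster counts asserted in both passes fall straight out of the sizes involved; spelled out (cluster size 4 MiB = 4194304 B, block size 4096 B):

  200 MiB backing file -> 50 clusters, one consumed by lvstore metadata -> total_data_clusters = 49
  400 MiB after grow   -> 100 clusters - 1                              -> total_data_clusters = 99
  150 MiB thick lvol   -> ceil(150/4) = 38 clusters = 152 MiB = 38912 blocks of 4096 B
                          (the "num_blocks": 38912 reported for Nvme0n1 above)
  free clusters        -> 99 - 38 = 61 (the free_clusters == 61 checks)

The single metadata cluster is inferred from the 50-vs-49 delta at this --md-pages-per-cluster-ratio; treat it as an observation of this run, not a general rule.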
00:09:44.694 06:34:24 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72866 00:09:44.694 06:34:24 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:44.694 06:34:24 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:44.694 06:34:24 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72866 /var/tmp/bdevperf.sock 00:09:44.694 06:34:24 -- common/autotest_common.sh@819 -- # '[' -z 72866 ']' 00:09:44.694 06:34:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:44.694 06:34:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:44.694 06:34:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:44.694 06:34:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:44.694 06:34:24 -- common/autotest_common.sh@10 -- # set +x 00:09:44.694 [2024-07-12 06:34:24.551251] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:44.694 [2024-07-12 06:34:24.551542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72866 ] 00:09:44.952 [2024-07-12 06:34:24.693949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.952 [2024-07-12 06:34:24.733458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.886 06:34:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.886 06:34:25 -- common/autotest_common.sh@852 -- # return 0 00:09:45.886 06:34:25 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:45.886 Nvme0n1 00:09:45.886 06:34:25 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:46.144 [ 00:09:46.144 { 00:09:46.144 "name": "Nvme0n1", 00:09:46.144 "aliases": [ 00:09:46.144 "282568f0-bf66-4781-b144-e20e7073ab9f" 00:09:46.144 ], 00:09:46.144 "product_name": "NVMe disk", 00:09:46.144 "block_size": 4096, 00:09:46.144 "num_blocks": 38912, 00:09:46.144 "uuid": "282568f0-bf66-4781-b144-e20e7073ab9f", 00:09:46.144 "assigned_rate_limits": { 00:09:46.144 "rw_ios_per_sec": 0, 00:09:46.144 "rw_mbytes_per_sec": 0, 00:09:46.144 "r_mbytes_per_sec": 0, 00:09:46.144 "w_mbytes_per_sec": 0 00:09:46.144 }, 00:09:46.144 "claimed": false, 00:09:46.144 "zoned": false, 00:09:46.144 "supported_io_types": { 00:09:46.144 "read": true, 00:09:46.144 "write": true, 00:09:46.144 "unmap": true, 00:09:46.144 "write_zeroes": true, 00:09:46.144 "flush": true, 00:09:46.144 "reset": true, 00:09:46.144 "compare": true, 00:09:46.144 "compare_and_write": true, 00:09:46.144 "abort": true, 00:09:46.144 "nvme_admin": true, 00:09:46.144 "nvme_io": true 00:09:46.144 }, 00:09:46.144 "driver_specific": { 00:09:46.144 "nvme": [ 00:09:46.144 { 00:09:46.144 "trid": { 00:09:46.144 "trtype": "TCP", 00:09:46.144 "adrfam": "IPv4", 00:09:46.144 "traddr": "10.0.0.2", 00:09:46.144 "trsvcid": "4420", 00:09:46.144 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:46.144 }, 00:09:46.144 "ctrlr_data": { 00:09:46.144 "cntlid": 1, 00:09:46.144 
"vendor_id": "0x8086", 00:09:46.144 "model_number": "SPDK bdev Controller", 00:09:46.144 "serial_number": "SPDK0", 00:09:46.144 "firmware_revision": "24.01.1", 00:09:46.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:46.144 "oacs": { 00:09:46.144 "security": 0, 00:09:46.144 "format": 0, 00:09:46.144 "firmware": 0, 00:09:46.144 "ns_manage": 0 00:09:46.144 }, 00:09:46.144 "multi_ctrlr": true, 00:09:46.144 "ana_reporting": false 00:09:46.144 }, 00:09:46.144 "vs": { 00:09:46.144 "nvme_version": "1.3" 00:09:46.144 }, 00:09:46.144 "ns_data": { 00:09:46.144 "id": 1, 00:09:46.144 "can_share": true 00:09:46.144 } 00:09:46.144 } 00:09:46.144 ], 00:09:46.144 "mp_policy": "active_passive" 00:09:46.144 } 00:09:46.144 } 00:09:46.144 ] 00:09:46.144 06:34:26 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72889 00:09:46.144 06:34:26 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:46.144 06:34:26 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:46.403 Running I/O for 10 seconds... 00:09:47.335 Latency(us) 00:09:47.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.335 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:47.335 =================================================================================================================== 00:09:47.335 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:47.335 00:09:48.266 06:34:28 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:48.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.266 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:48.266 =================================================================================================================== 00:09:48.266 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:48.266 00:09:48.524 true 00:09:48.524 06:34:28 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:48.524 06:34:28 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:48.782 06:34:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:48.782 06:34:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:48.782 06:34:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 72889 00:09:49.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.348 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:09:49.348 =================================================================================================================== 00:09:49.348 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:09:49.348 00:09:50.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.299 Nvme0n1 : 4.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:50.299 =================================================================================================================== 00:09:50.299 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:50.299 00:09:51.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.260 Nvme0n1 : 5.00 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:09:51.260 
=================================================================================================================== 00:09:51.260 Total : 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:09:51.260 00:09:52.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.634 Nvme0n1 : 6.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:09:52.634 =================================================================================================================== 00:09:52.634 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:09:52.634 00:09:53.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.565 Nvme0n1 : 7.00 7257.14 28.35 0.00 0.00 0.00 0.00 0.00 00:09:53.565 =================================================================================================================== 00:09:53.565 Total : 7257.14 28.35 0.00 0.00 0.00 0.00 0.00 00:09:53.565 00:09:54.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.500 Nvme0n1 : 8.00 6992.62 27.31 0.00 0.00 0.00 0.00 0.00 00:09:54.500 =================================================================================================================== 00:09:54.500 Total : 6992.62 27.31 0.00 0.00 0.00 0.00 0.00 00:09:54.500 00:09:55.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.437 Nvme0n1 : 9.00 6949.44 27.15 0.00 0.00 0.00 0.00 0.00 00:09:55.437 =================================================================================================================== 00:09:55.437 Total : 6949.44 27.15 0.00 0.00 0.00 0.00 0.00 00:09:55.437 00:09:56.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.373 Nvme0n1 : 10.00 6864.10 26.81 0.00 0.00 0.00 0.00 0.00 00:09:56.373 =================================================================================================================== 00:09:56.373 Total : 6864.10 26.81 0.00 0.00 0.00 0.00 0.00 00:09:56.373 00:09:56.373 00:09:56.373 Latency(us) 00:09:56.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.373 Nvme0n1 : 10.01 6871.52 26.84 0.00 0.00 18622.98 11319.85 253564.74 00:09:56.373 =================================================================================================================== 00:09:56.373 Total : 6871.52 26.84 0.00 0.00 18622.98 11319.85 253564.74 00:09:56.373 0 00:09:56.373 06:34:36 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72866 00:09:56.373 06:34:36 -- common/autotest_common.sh@926 -- # '[' -z 72866 ']' 00:09:56.373 06:34:36 -- common/autotest_common.sh@930 -- # kill -0 72866 00:09:56.373 06:34:36 -- common/autotest_common.sh@931 -- # uname 00:09:56.373 06:34:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.373 06:34:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72866 00:09:56.373 killing process with pid 72866 00:09:56.373 Received shutdown signal, test time was about 10.000000 seconds 00:09:56.373 00:09:56.373 Latency(us) 00:09:56.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.373 =================================================================================================================== 00:09:56.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:56.373 06:34:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:56.373 06:34:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo 
']' 00:09:56.373 06:34:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72866' 00:09:56.373 06:34:36 -- common/autotest_common.sh@945 -- # kill 72866 00:09:56.373 06:34:36 -- common/autotest_common.sh@950 -- # wait 72866 00:09:56.632 06:34:36 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:56.891 06:34:36 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:56.891 06:34:36 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:57.150 06:34:36 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:57.150 06:34:36 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:57.150 06:34:36 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72515 00:09:57.150 06:34:36 -- target/nvmf_lvs_grow.sh@74 -- # wait 72515 00:09:57.150 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72515 Killed "${NVMF_APP[@]}" "$@" 00:09:57.150 06:34:36 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:57.150 06:34:36 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:57.150 06:34:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:57.150 06:34:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:57.150 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:57.150 06:34:36 -- nvmf/common.sh@469 -- # nvmfpid=73021 00:09:57.150 06:34:36 -- nvmf/common.sh@470 -- # waitforlisten 73021 00:09:57.150 06:34:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:57.150 06:34:36 -- common/autotest_common.sh@819 -- # '[' -z 73021 ']' 00:09:57.150 06:34:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.150 06:34:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:57.150 06:34:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.150 06:34:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:57.150 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:57.150 [2024-07-12 06:34:36.981091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:57.150 [2024-07-12 06:34:36.981192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.409 [2024-07-12 06:34:37.125484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.409 [2024-07-12 06:34:37.162221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.409 [2024-07-12 06:34:37.162420] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.409 [2024-07-12 06:34:37.162447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.409 [2024-07-12 06:34:37.162465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:57.409 [2024-07-12 06:34:37.162512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.350 06:34:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:58.350 06:34:37 -- common/autotest_common.sh@852 -- # return 0 00:09:58.350 06:34:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:58.350 06:34:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:58.350 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:09:58.350 06:34:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.350 06:34:38 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:58.608 [2024-07-12 06:34:38.320458] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:58.608 [2024-07-12 06:34:38.320768] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:58.608 [2024-07-12 06:34:38.321116] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:58.608 06:34:38 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:58.608 06:34:38 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 282568f0-bf66-4781-b144-e20e7073ab9f 00:09:58.608 06:34:38 -- common/autotest_common.sh@887 -- # local bdev_name=282568f0-bf66-4781-b144-e20e7073ab9f 00:09:58.608 06:34:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:58.608 06:34:38 -- common/autotest_common.sh@889 -- # local i 00:09:58.608 06:34:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:58.608 06:34:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:58.608 06:34:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:58.866 06:34:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 282568f0-bf66-4781-b144-e20e7073ab9f -t 2000 00:09:59.124 [ 00:09:59.124 { 00:09:59.124 "name": "282568f0-bf66-4781-b144-e20e7073ab9f", 00:09:59.124 "aliases": [ 00:09:59.124 "lvs/lvol" 00:09:59.124 ], 00:09:59.124 "product_name": "Logical Volume", 00:09:59.124 "block_size": 4096, 00:09:59.124 "num_blocks": 38912, 00:09:59.124 "uuid": "282568f0-bf66-4781-b144-e20e7073ab9f", 00:09:59.124 "assigned_rate_limits": { 00:09:59.124 "rw_ios_per_sec": 0, 00:09:59.124 "rw_mbytes_per_sec": 0, 00:09:59.124 "r_mbytes_per_sec": 0, 00:09:59.124 "w_mbytes_per_sec": 0 00:09:59.124 }, 00:09:59.124 "claimed": false, 00:09:59.124 "zoned": false, 00:09:59.124 "supported_io_types": { 00:09:59.124 "read": true, 00:09:59.124 "write": true, 00:09:59.124 "unmap": true, 00:09:59.124 "write_zeroes": true, 00:09:59.124 "flush": false, 00:09:59.124 "reset": true, 00:09:59.124 "compare": false, 00:09:59.124 "compare_and_write": false, 00:09:59.124 "abort": false, 00:09:59.124 "nvme_admin": false, 00:09:59.124 "nvme_io": false 00:09:59.124 }, 00:09:59.124 "driver_specific": { 00:09:59.124 "lvol": { 00:09:59.124 "lvol_store_uuid": "e7d597d3-63f8-4e64-969c-fa1dfb928007", 00:09:59.124 "base_bdev": "aio_bdev", 00:09:59.124 "thin_provision": false, 00:09:59.124 "snapshot": false, 00:09:59.124 "clone": false, 00:09:59.124 "esnap_clone": false 00:09:59.124 } 00:09:59.124 } 00:09:59.124 } 00:09:59.124 ] 00:09:59.382 06:34:39 -- common/autotest_common.sh@895 -- # return 0 00:09:59.382 06:34:39 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:59.382 06:34:39 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:59.640 06:34:39 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:59.640 06:34:39 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:09:59.640 06:34:39 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:59.898 06:34:39 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:59.898 06:34:39 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:00.158 [2024-07-12 06:34:39.874309] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:00.158 06:34:39 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:10:00.158 06:34:39 -- common/autotest_common.sh@640 -- # local es=0 00:10:00.158 06:34:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:10:00.158 06:34:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.158 06:34:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:00.158 06:34:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.158 06:34:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:00.158 06:34:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.158 06:34:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:00.158 06:34:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.158 06:34:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:00.158 06:34:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:10:00.417 request: 00:10:00.417 { 00:10:00.417 "uuid": "e7d597d3-63f8-4e64-969c-fa1dfb928007", 00:10:00.417 "method": "bdev_lvol_get_lvstores", 00:10:00.417 "req_id": 1 00:10:00.417 } 00:10:00.417 Got JSON-RPC error response 00:10:00.417 response: 00:10:00.417 { 00:10:00.417 "code": -19, 00:10:00.417 "message": "No such device" 00:10:00.417 } 00:10:00.417 06:34:40 -- common/autotest_common.sh@643 -- # es=1 00:10:00.417 06:34:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:00.417 06:34:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:00.417 06:34:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:00.417 06:34:40 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.675 aio_bdev 00:10:00.675 06:34:40 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 282568f0-bf66-4781-b144-e20e7073ab9f 00:10:00.675 06:34:40 -- common/autotest_common.sh@887 -- # local bdev_name=282568f0-bf66-4781-b144-e20e7073ab9f 00:10:00.676 06:34:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:00.676 06:34:40 -- common/autotest_common.sh@889 -- # local i 00:10:00.676 06:34:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:00.676 06:34:40 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:00.676 06:34:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.934 06:34:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 282568f0-bf66-4781-b144-e20e7073ab9f -t 2000 00:10:01.192 [ 00:10:01.192 { 00:10:01.192 "name": "282568f0-bf66-4781-b144-e20e7073ab9f", 00:10:01.192 "aliases": [ 00:10:01.192 "lvs/lvol" 00:10:01.192 ], 00:10:01.192 "product_name": "Logical Volume", 00:10:01.192 "block_size": 4096, 00:10:01.192 "num_blocks": 38912, 00:10:01.192 "uuid": "282568f0-bf66-4781-b144-e20e7073ab9f", 00:10:01.192 "assigned_rate_limits": { 00:10:01.192 "rw_ios_per_sec": 0, 00:10:01.192 "rw_mbytes_per_sec": 0, 00:10:01.192 "r_mbytes_per_sec": 0, 00:10:01.192 "w_mbytes_per_sec": 0 00:10:01.192 }, 00:10:01.192 "claimed": false, 00:10:01.192 "zoned": false, 00:10:01.192 "supported_io_types": { 00:10:01.192 "read": true, 00:10:01.192 "write": true, 00:10:01.192 "unmap": true, 00:10:01.192 "write_zeroes": true, 00:10:01.192 "flush": false, 00:10:01.192 "reset": true, 00:10:01.192 "compare": false, 00:10:01.192 "compare_and_write": false, 00:10:01.192 "abort": false, 00:10:01.192 "nvme_admin": false, 00:10:01.192 "nvme_io": false 00:10:01.192 }, 00:10:01.192 "driver_specific": { 00:10:01.192 "lvol": { 00:10:01.192 "lvol_store_uuid": "e7d597d3-63f8-4e64-969c-fa1dfb928007", 00:10:01.192 "base_bdev": "aio_bdev", 00:10:01.192 "thin_provision": false, 00:10:01.192 "snapshot": false, 00:10:01.192 "clone": false, 00:10:01.192 "esnap_clone": false 00:10:01.192 } 00:10:01.192 } 00:10:01.192 } 00:10:01.192 ] 00:10:01.192 06:34:40 -- common/autotest_common.sh@895 -- # return 0 00:10:01.192 06:34:40 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:01.193 06:34:40 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:10:01.451 06:34:41 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:01.451 06:34:41 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:10:01.451 06:34:41 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:01.710 06:34:41 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:01.710 06:34:41 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 282568f0-bf66-4781-b144-e20e7073ab9f 00:10:01.970 06:34:41 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7d597d3-63f8-4e64-969c-fa1dfb928007 00:10:02.228 06:34:41 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.487 06:34:42 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:02.745 ************************************ 00:10:02.745 END TEST lvs_grow_dirty 00:10:02.745 ************************************ 00:10:02.745 00:10:02.745 real 0m20.635s 00:10:02.745 user 0m42.860s 00:10:02.745 sys 0m7.810s 00:10:02.746 06:34:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.746 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.746 06:34:42 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:02.746 06:34:42 -- common/autotest_common.sh@796 -- # type=--id 00:10:02.746 06:34:42 -- 
common/autotest_common.sh@797 -- # id=0 00:10:02.746 06:34:42 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:10:02.746 06:34:42 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:02.746 06:34:42 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:10:02.746 06:34:42 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:10:02.746 06:34:42 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:10:02.746 06:34:42 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:02.746 nvmf_trace.0 00:10:02.746 06:34:42 -- common/autotest_common.sh@811 -- # return 0 00:10:02.746 06:34:42 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:02.746 06:34:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:02.746 06:34:42 -- nvmf/common.sh@116 -- # sync 00:10:03.004 06:34:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:03.004 06:34:42 -- nvmf/common.sh@119 -- # set +e 00:10:03.004 06:34:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:03.004 06:34:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:03.004 rmmod nvme_tcp 00:10:03.004 rmmod nvme_fabrics 00:10:03.273 rmmod nvme_keyring 00:10:03.273 06:34:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:03.273 06:34:42 -- nvmf/common.sh@123 -- # set -e 00:10:03.273 06:34:42 -- nvmf/common.sh@124 -- # return 0 00:10:03.273 06:34:42 -- nvmf/common.sh@477 -- # '[' -n 73021 ']' 00:10:03.273 06:34:42 -- nvmf/common.sh@478 -- # killprocess 73021 00:10:03.273 06:34:42 -- common/autotest_common.sh@926 -- # '[' -z 73021 ']' 00:10:03.273 06:34:42 -- common/autotest_common.sh@930 -- # kill -0 73021 00:10:03.273 06:34:42 -- common/autotest_common.sh@931 -- # uname 00:10:03.273 06:34:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:03.273 06:34:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73021 00:10:03.273 killing process with pid 73021 00:10:03.273 06:34:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:03.273 06:34:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:03.273 06:34:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73021' 00:10:03.273 06:34:42 -- common/autotest_common.sh@945 -- # kill 73021 00:10:03.273 06:34:42 -- common/autotest_common.sh@950 -- # wait 73021 00:10:03.273 06:34:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:03.273 06:34:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:03.273 06:34:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:03.273 06:34:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.273 06:34:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:03.273 06:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.273 06:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.273 06:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.273 06:34:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:03.273 ************************************ 00:10:03.273 END TEST nvmf_lvs_grow 00:10:03.273 ************************************ 00:10:03.273 00:10:03.273 real 0m41.204s 00:10:03.273 user 1m6.983s 00:10:03.273 sys 0m10.905s 00:10:03.273 06:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.273 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:10:03.531 06:34:43 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:03.531 06:34:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:03.531 06:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:03.531 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:10:03.531 ************************************ 00:10:03.531 START TEST nvmf_bdev_io_wait 00:10:03.531 ************************************ 00:10:03.531 06:34:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:03.531 * Looking for test storage... 00:10:03.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.531 06:34:43 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.531 06:34:43 -- nvmf/common.sh@7 -- # uname -s 00:10:03.531 06:34:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.532 06:34:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.532 06:34:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.532 06:34:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.532 06:34:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.532 06:34:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.532 06:34:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.532 06:34:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.532 06:34:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.532 06:34:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.532 06:34:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:10:03.532 06:34:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:10:03.532 06:34:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.532 06:34:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.532 06:34:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.532 06:34:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.532 06:34:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.532 06:34:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.532 06:34:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.532 06:34:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.532 06:34:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.532 06:34:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.532 06:34:43 -- paths/export.sh@5 -- # export PATH 00:10:03.532 06:34:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.532 06:34:43 -- nvmf/common.sh@46 -- # : 0 00:10:03.532 06:34:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:03.532 06:34:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:03.532 06:34:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:03.532 06:34:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.532 06:34:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.532 06:34:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:03.532 06:34:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:03.532 06:34:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:03.532 06:34:43 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.532 06:34:43 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.532 06:34:43 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:03.532 06:34:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:03.532 06:34:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.532 06:34:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:03.532 06:34:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:03.532 06:34:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:03.532 06:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.532 06:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.532 06:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.532 06:34:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:03.532 06:34:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:03.532 06:34:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:03.532 06:34:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:03.532 06:34:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
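For reference, the host identity that nvmf/common.sh derives above feeds later `nvme connect` calls through the NVME_HOST array. A minimal sketch, consistent with the NQN/ID pair logged above (the suffix extraction is an illustrative assumption, not quoted from common.sh):
# generate an NQN and reuse its uuid suffix as the host ID (matches the pair traced above)
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: the ID is the uuid portion of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later: nvme connect -t tcp -a 10.0.0.2 -s 4420 -n <subsystem nqn> "${NVME_HOST[@]}"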
00:10:03.532 06:34:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:03.532 06:34:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.532 06:34:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.532 06:34:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:03.532 06:34:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:03.532 06:34:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.532 06:34:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.532 06:34:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.532 06:34:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.532 06:34:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.532 06:34:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.532 06:34:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.532 06:34:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.532 06:34:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:03.532 06:34:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:03.532 Cannot find device "nvmf_tgt_br" 00:10:03.532 06:34:43 -- nvmf/common.sh@154 -- # true 00:10:03.532 06:34:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.532 Cannot find device "nvmf_tgt_br2" 00:10:03.532 06:34:43 -- nvmf/common.sh@155 -- # true 00:10:03.532 06:34:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:03.532 06:34:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:03.532 Cannot find device "nvmf_tgt_br" 00:10:03.532 06:34:43 -- nvmf/common.sh@157 -- # true 00:10:03.532 06:34:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:03.532 Cannot find device "nvmf_tgt_br2" 00:10:03.532 06:34:43 -- nvmf/common.sh@158 -- # true 00:10:03.532 06:34:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:03.532 06:34:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:03.790 06:34:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.790 06:34:43 -- nvmf/common.sh@161 -- # true 00:10:03.790 06:34:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.790 06:34:43 -- nvmf/common.sh@162 -- # true 00:10:03.790 06:34:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.790 06:34:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.790 06:34:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.790 06:34:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.790 06:34:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.790 06:34:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.791 06:34:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.791 06:34:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:03.791 06:34:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:03.791 
06:34:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:03.791 06:34:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:03.791 06:34:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:03.791 06:34:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:03.791 06:34:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.791 06:34:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.791 06:34:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.791 06:34:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:03.791 06:34:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:03.791 06:34:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.791 06:34:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.791 06:34:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.791 06:34:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.791 06:34:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.791 06:34:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:03.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:03.791 00:10:03.791 --- 10.0.0.2 ping statistics --- 00:10:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.791 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:03.791 06:34:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:03.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:03.791 00:10:03.791 --- 10.0.0.3 ping statistics --- 00:10:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.791 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:03.791 06:34:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:03.791 00:10:03.791 --- 10.0.0.1 ping statistics --- 00:10:03.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.791 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:03.791 06:34:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.791 06:34:43 -- nvmf/common.sh@421 -- # return 0 00:10:03.791 06:34:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:03.791 06:34:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.791 06:34:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:03.791 06:34:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:03.791 06:34:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.791 06:34:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:03.791 06:34:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:03.791 06:34:43 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:03.791 06:34:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:03.791 06:34:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:03.791 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:10:03.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
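For reference, the veth topology that nvmf_veth_init assembles above can be reproduced by hand; this is a minimal sketch condensed from the commands traced in this section (device and namespace names as logged; the second target interface, nvmf_tgt_if2/10.0.0.3, follows the same pattern):
# one namespace for the target, veth pairs, and a bridge joining the host-side ends
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                      # sanity check, as above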
00:10:03.791 06:34:43 -- nvmf/common.sh@469 -- # nvmfpid=73332 00:10:03.791 06:34:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:03.791 06:34:43 -- nvmf/common.sh@470 -- # waitforlisten 73332 00:10:03.791 06:34:43 -- common/autotest_common.sh@819 -- # '[' -z 73332 ']' 00:10:03.791 06:34:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.791 06:34:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:03.791 06:34:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.791 06:34:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:03.791 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.049 [2024-07-12 06:34:43.724400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:04.049 [2024-07-12 06:34:43.724487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.049 [2024-07-12 06:34:43.867628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.049 [2024-07-12 06:34:43.911598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.049 [2024-07-12 06:34:43.912000] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.049 [2024-07-12 06:34:43.912445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.049 [2024-07-12 06:34:43.912629] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
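The target is launched inside that namespace and then provisioned over its RPC socket; a sketch of the sequence this test drives next, using the scripts/rpc.py wrapper behind rpc_cmd (all calls as traced in this section; -p/-c set the bdev I/O pool and cache sizes, and the tiny values presumably force pool exhaustion so the io_wait path is exercised):
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
./scripts/rpc.py bdev_set_options -p 5 -c 1     # starve the bdev_io pool on purpose
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the 0xFFFF tracepoint mask above is why the log suggests 'spdk_trace -s nvmf -i 0'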
00:10:04.049 [2024-07-12 06:34:43.913025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.049 [2024-07-12 06:34:43.913219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.049 [2024-07-12 06:34:43.913758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.049 [2024-07-12 06:34:43.913767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.329 06:34:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:04.329 06:34:43 -- common/autotest_common.sh@852 -- # return 0 00:10:04.329 06:34:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:04.329 06:34:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:04.329 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 06:34:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 [2024-07-12 06:34:44.080878] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 Malloc0 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.329 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:04.329 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:10:04.329 [2024-07-12 06:34:44.142153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.329 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73360 00:10:04.329 06:34:44 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@30 -- # READ_PID=73362 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # config=() 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # local subsystem config 00:10:04.329 06:34:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73364 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # config=() 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:04.329 { 00:10:04.329 "params": { 00:10:04.329 "name": "Nvme$subsystem", 00:10:04.329 "trtype": "$TEST_TRANSPORT", 00:10:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.329 "adrfam": "ipv4", 00:10:04.329 "trsvcid": "$NVMF_PORT", 00:10:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.329 "hdgst": ${hdgst:-false}, 00:10:04.329 "ddgst": ${ddgst:-false} 00:10:04.329 }, 00:10:04.329 "method": "bdev_nvme_attach_controller" 00:10:04.329 } 00:10:04.329 EOF 00:10:04.329 )") 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # local subsystem config 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73366 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@35 -- # sync 00:10:04.329 06:34:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:04.329 { 00:10:04.329 "params": { 00:10:04.329 "name": "Nvme$subsystem", 00:10:04.329 "trtype": "$TEST_TRANSPORT", 00:10:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.329 "adrfam": "ipv4", 00:10:04.329 "trsvcid": "$NVMF_PORT", 00:10:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.329 "hdgst": ${hdgst:-false}, 00:10:04.329 "ddgst": ${ddgst:-false} 00:10:04.329 }, 00:10:04.329 "method": "bdev_nvme_attach_controller" 00:10:04.329 } 00:10:04.329 EOF 00:10:04.329 )") 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # cat 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # config=() 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # cat 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # local subsystem config 00:10:04.329 06:34:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:04.329 { 00:10:04.329 "params": { 00:10:04.329 "name": "Nvme$subsystem", 00:10:04.329 "trtype": "$TEST_TRANSPORT", 00:10:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.329 "adrfam": "ipv4", 00:10:04.329 "trsvcid": "$NVMF_PORT", 00:10:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:10:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.329 "hdgst": ${hdgst:-false}, 00:10:04.329 "ddgst": ${ddgst:-false} 00:10:04.329 }, 00:10:04.329 "method": "bdev_nvme_attach_controller" 00:10:04.329 } 00:10:04.329 EOF 00:10:04.329 )") 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # cat 00:10:04.329 06:34:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # config=() 00:10:04.329 06:34:44 -- nvmf/common.sh@520 -- # local subsystem config 00:10:04.329 06:34:44 -- nvmf/common.sh@544 -- # jq . 00:10:04.329 06:34:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:04.329 06:34:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:04.329 { 00:10:04.329 "params": { 00:10:04.329 "name": "Nvme$subsystem", 00:10:04.329 "trtype": "$TEST_TRANSPORT", 00:10:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.329 "adrfam": "ipv4", 00:10:04.329 "trsvcid": "$NVMF_PORT", 00:10:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.329 "hdgst": ${hdgst:-false}, 00:10:04.329 "ddgst": ${ddgst:-false} 00:10:04.329 }, 00:10:04.329 "method": "bdev_nvme_attach_controller" 00:10:04.329 } 00:10:04.329 EOF 00:10:04.329 )") 00:10:04.330 06:34:44 -- nvmf/common.sh@544 -- # jq . 00:10:04.330 06:34:44 -- nvmf/common.sh@542 -- # cat 00:10:04.330 06:34:44 -- nvmf/common.sh@545 -- # IFS=, 00:10:04.330 06:34:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:04.330 "params": { 00:10:04.330 "name": "Nvme1", 00:10:04.330 "trtype": "tcp", 00:10:04.330 "traddr": "10.0.0.2", 00:10:04.330 "adrfam": "ipv4", 00:10:04.330 "trsvcid": "4420", 00:10:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.330 "hdgst": false, 00:10:04.330 "ddgst": false 00:10:04.330 }, 00:10:04.330 "method": "bdev_nvme_attach_controller" 00:10:04.330 }' 00:10:04.330 06:34:44 -- nvmf/common.sh@544 -- # jq . 00:10:04.330 06:34:44 -- nvmf/common.sh@545 -- # IFS=, 00:10:04.330 06:34:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:04.330 "params": { 00:10:04.330 "name": "Nvme1", 00:10:04.330 "trtype": "tcp", 00:10:04.330 "traddr": "10.0.0.2", 00:10:04.330 "adrfam": "ipv4", 00:10:04.330 "trsvcid": "4420", 00:10:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.330 "hdgst": false, 00:10:04.330 "ddgst": false 00:10:04.330 }, 00:10:04.330 "method": "bdev_nvme_attach_controller" 00:10:04.330 }' 00:10:04.330 06:34:44 -- nvmf/common.sh@545 -- # IFS=, 00:10:04.330 06:34:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:04.330 "params": { 00:10:04.330 "name": "Nvme1", 00:10:04.330 "trtype": "tcp", 00:10:04.330 "traddr": "10.0.0.2", 00:10:04.330 "adrfam": "ipv4", 00:10:04.330 "trsvcid": "4420", 00:10:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.330 "hdgst": false, 00:10:04.330 "ddgst": false 00:10:04.330 }, 00:10:04.330 "method": "bdev_nvme_attach_controller" 00:10:04.330 }' 00:10:04.330 06:34:44 -- nvmf/common.sh@544 -- # jq . 00:10:04.330 [2024-07-12 06:34:44.198949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:04.330 [2024-07-12 06:34:44.199034] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:04.330 06:34:44 -- nvmf/common.sh@545 -- # IFS=, 00:10:04.330 06:34:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:04.330 "params": { 00:10:04.330 "name": "Nvme1", 00:10:04.330 "trtype": "tcp", 00:10:04.330 "traddr": "10.0.0.2", 00:10:04.330 "adrfam": "ipv4", 00:10:04.330 "trsvcid": "4420", 00:10:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.330 "hdgst": false, 00:10:04.330 "ddgst": false 00:10:04.330 }, 00:10:04.330 "method": "bdev_nvme_attach_controller" 00:10:04.330 }' 00:10:04.330 [2024-07-12 06:34:44.220171] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:04.330 [2024-07-12 06:34:44.220258] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:04.330 [2024-07-12 06:34:44.223993] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:04.330 [2024-07-12 06:34:44.224056] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:04.588 06:34:44 -- target/bdev_io_wait.sh@37 -- # wait 73360 00:10:04.588 [2024-07-12 06:34:44.251920] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:04.588 [2024-07-12 06:34:44.252032] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:04.588 [2024-07-12 06:34:44.376076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.588 [2024-07-12 06:34:44.397723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.588 [2024-07-12 06:34:44.418020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.588 [2024-07-12 06:34:44.442517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.588 [2024-07-12 06:34:44.464701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.588 [2024-07-12 06:34:44.490026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.846 [2024-07-12 06:34:44.508806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.846 Running I/O for 1 seconds... 00:10:04.846 [2024-07-12 06:34:44.532805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:04.846 Running I/O for 1 seconds... 00:10:04.846 Running I/O for 1 seconds... 00:10:04.846 Running I/O for 1 seconds... 
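The test fans out four bdevperf instances, one workload each, all pointed at the same subsystem; the --json /dev/fd/63 in the invocations above is bash process substitution feeding each instance the bdev_nvme_attach_controller config printed in this section. A minimal sketch of the fan-out (gen_nvmf_target_json is the helper traced above):
# one bdevperf per workload, distinct core mask (-m) and instance id (-i), shared target
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
# ... likewise -m 0x40/-i 3 -w flush and -m 0x80/-i 4 -w unmap
wait "$WRITE_PID"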
00:10:05.781 00:10:05.781 Latency(us) 00:10:05.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.781 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:05.781 Nvme1n1 : 1.02 6183.36 24.15 0.00 0.00 20557.64 9353.77 39798.23 00:10:05.781 =================================================================================================================== 00:10:05.781 Total : 6183.36 24.15 0.00 0.00 20557.64 9353.77 39798.23 00:10:05.781 00:10:05.781 Latency(us) 00:10:05.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.781 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:05.781 Nvme1n1 : 1.00 165461.15 646.33 0.00 0.00 770.87 348.16 1213.91 00:10:05.781 =================================================================================================================== 00:10:05.781 Total : 165461.15 646.33 0.00 0.00 770.87 348.16 1213.91 00:10:05.781 00:10:05.781 Latency(us) 00:10:05.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.781 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:05.781 Nvme1n1 : 1.01 8743.25 34.15 0.00 0.00 14569.18 8519.68 27882.59 00:10:05.781 =================================================================================================================== 00:10:05.781 Total : 8743.25 34.15 0.00 0.00 14569.18 8519.68 27882.59 00:10:05.781 00:10:05.781 Latency(us) 00:10:05.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.781 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.781 Nvme1n1 : 1.00 6136.05 23.97 0.00 0.00 20799.34 5034.36 47900.86 00:10:05.781 =================================================================================================================== 00:10:05.781 Total : 6136.05 23.97 0.00 0.00 20799.34 5034.36 47900.86 00:10:06.040 06:34:45 -- target/bdev_io_wait.sh@38 -- # wait 73362 00:10:06.040 06:34:45 -- target/bdev_io_wait.sh@39 -- # wait 73364 00:10:06.040 06:34:45 -- target/bdev_io_wait.sh@40 -- # wait 73366 00:10:06.040 06:34:45 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.040 06:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.040 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.040 06:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.040 06:34:45 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:06.040 06:34:45 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:06.040 06:34:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:06.040 06:34:45 -- nvmf/common.sh@116 -- # sync 00:10:06.040 06:34:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:06.040 06:34:45 -- nvmf/common.sh@119 -- # set +e 00:10:06.040 06:34:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:06.040 06:34:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:06.040 rmmod nvme_tcp 00:10:06.040 rmmod nvme_fabrics 00:10:06.040 rmmod nvme_keyring 00:10:06.040 06:34:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:06.040 06:34:45 -- nvmf/common.sh@123 -- # set -e 00:10:06.040 06:34:45 -- nvmf/common.sh@124 -- # return 0 00:10:06.040 06:34:45 -- nvmf/common.sh@477 -- # '[' -n 73332 ']' 00:10:06.040 06:34:45 -- nvmf/common.sh@478 -- # killprocess 73332 00:10:06.040 06:34:45 -- common/autotest_common.sh@926 -- # '[' -z 73332 ']' 00:10:06.040 06:34:45 -- common/autotest_common.sh@930 -- 
# kill -0 73332 00:10:06.040 06:34:45 -- common/autotest_common.sh@931 -- # uname 00:10:06.040 06:34:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:06.040 06:34:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73332 00:10:06.040 06:34:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:06.040 06:34:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:06.040 killing process with pid 73332 00:10:06.040 06:34:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73332' 00:10:06.040 06:34:45 -- common/autotest_common.sh@945 -- # kill 73332 00:10:06.040 06:34:45 -- common/autotest_common.sh@950 -- # wait 73332 00:10:06.298 06:34:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:06.298 06:34:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:06.298 06:34:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:06.298 06:34:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.298 06:34:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:06.298 06:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.298 06:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.298 06:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.299 06:34:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:06.299 00:10:06.299 real 0m2.876s 00:10:06.299 user 0m12.668s 00:10:06.299 sys 0m1.844s 00:10:06.299 06:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.299 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:10:06.299 ************************************ 00:10:06.299 END TEST nvmf_bdev_io_wait 00:10:06.299 ************************************ 00:10:06.299 06:34:46 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:06.299 06:34:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:06.299 06:34:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.299 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:10:06.299 ************************************ 00:10:06.299 START TEST nvmf_queue_depth 00:10:06.299 ************************************ 00:10:06.299 06:34:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:06.299 * Looking for test storage... 
00:10:06.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.299 06:34:46 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.299 06:34:46 -- nvmf/common.sh@7 -- # uname -s 00:10:06.558 06:34:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.558 06:34:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.558 06:34:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.558 06:34:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.558 06:34:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.558 06:34:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.558 06:34:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.558 06:34:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.558 06:34:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.558 06:34:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.558 06:34:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:10:06.558 06:34:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:10:06.558 06:34:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.558 06:34:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.558 06:34:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.558 06:34:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.558 06:34:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.558 06:34:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.559 06:34:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.559 06:34:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 06:34:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 06:34:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 06:34:46 -- 
paths/export.sh@5 -- # export PATH 00:10:06.559 06:34:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 06:34:46 -- nvmf/common.sh@46 -- # : 0 00:10:06.559 06:34:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:06.559 06:34:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:06.559 06:34:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:06.559 06:34:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.559 06:34:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.559 06:34:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:06.559 06:34:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:06.559 06:34:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:06.559 06:34:46 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:06.559 06:34:46 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:06.559 06:34:46 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:06.559 06:34:46 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:06.559 06:34:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:06.559 06:34:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.559 06:34:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:06.559 06:34:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:06.559 06:34:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:06.559 06:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.559 06:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.559 06:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.559 06:34:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:06.559 06:34:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:06.559 06:34:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:06.559 06:34:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:06.559 06:34:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:06.559 06:34:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:06.559 06:34:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.559 06:34:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.559 06:34:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:06.559 06:34:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:06.559 06:34:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.559 06:34:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.559 06:34:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.559 06:34:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.559 06:34:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.559 06:34:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.559 06:34:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.559 06:34:46 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.559 06:34:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:06.559 06:34:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:06.559 Cannot find device "nvmf_tgt_br" 00:10:06.559 06:34:46 -- nvmf/common.sh@154 -- # true 00:10:06.559 06:34:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.559 Cannot find device "nvmf_tgt_br2" 00:10:06.559 06:34:46 -- nvmf/common.sh@155 -- # true 00:10:06.559 06:34:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:06.559 06:34:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:06.559 Cannot find device "nvmf_tgt_br" 00:10:06.559 06:34:46 -- nvmf/common.sh@157 -- # true 00:10:06.559 06:34:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:06.559 Cannot find device "nvmf_tgt_br2" 00:10:06.559 06:34:46 -- nvmf/common.sh@158 -- # true 00:10:06.559 06:34:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:06.559 06:34:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:06.559 06:34:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.559 06:34:46 -- nvmf/common.sh@161 -- # true 00:10:06.559 06:34:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.559 06:34:46 -- nvmf/common.sh@162 -- # true 00:10:06.559 06:34:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.559 06:34:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.559 06:34:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.559 06:34:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.559 06:34:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.559 06:34:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.559 06:34:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.559 06:34:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.819 06:34:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.819 06:34:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:06.819 06:34:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:06.819 06:34:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:06.819 06:34:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:06.819 06:34:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.819 06:34:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.819 06:34:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.819 06:34:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:06.819 06:34:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:06.819 06:34:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.819 06:34:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.819 06:34:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.819 
06:34:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.819 06:34:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.819 06:34:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:06.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:10:06.819 00:10:06.819 --- 10.0.0.2 ping statistics --- 00:10:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.819 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:06.819 06:34:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:06.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:06.819 00:10:06.819 --- 10.0.0.3 ping statistics --- 00:10:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.819 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:06.819 06:34:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:06.819 00:10:06.819 --- 10.0.0.1 ping statistics --- 00:10:06.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.819 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:06.819 06:34:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.819 06:34:46 -- nvmf/common.sh@421 -- # return 0 00:10:06.819 06:34:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:06.819 06:34:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.819 06:34:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:06.819 06:34:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:06.819 06:34:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.819 06:34:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:06.819 06:34:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:06.819 06:34:46 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:06.819 06:34:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:06.819 06:34:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:06.819 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:10:06.819 06:34:46 -- nvmf/common.sh@469 -- # nvmfpid=73573 00:10:06.819 06:34:46 -- nvmf/common.sh@470 -- # waitforlisten 73573 00:10:06.819 06:34:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.819 06:34:46 -- common/autotest_common.sh@819 -- # '[' -z 73573 ']' 00:10:06.819 06:34:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.819 06:34:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:06.819 06:34:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.819 06:34:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:06.819 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:10:06.819 [2024-07-12 06:34:46.662842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:06.819 [2024-07-12 06:34:46.662945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.078 [2024-07-12 06:34:46.802524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.078 [2024-07-12 06:34:46.838689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:07.078 [2024-07-12 06:34:46.838834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.078 [2024-07-12 06:34:46.838848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.078 [2024-07-12 06:34:46.838857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.078 [2024-07-12 06:34:46.838889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.014 06:34:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:08.014 06:34:47 -- common/autotest_common.sh@852 -- # return 0 00:10:08.014 06:34:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:08.014 06:34:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 06:34:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.014 06:34:47 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.014 06:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 [2024-07-12 06:34:47.708243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.014 06:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.014 06:34:47 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.014 06:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 Malloc0 00:10:08.014 06:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.014 06:34:47 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.014 06:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 06:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.014 06:34:47 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.014 06:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 06:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.014 06:34:47 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.014 06:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 [2024-07-12 06:34:47.763449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.014 06:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:08.014 06:34:47 -- target/queue_depth.sh@30 -- # bdevperf_pid=73605 00:10:08.014 06:34:47 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:08.014 06:34:47 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.014 06:34:47 -- target/queue_depth.sh@33 -- # waitforlisten 73605 /var/tmp/bdevperf.sock 00:10:08.014 06:34:47 -- common/autotest_common.sh@819 -- # '[' -z 73605 ']' 00:10:08.014 06:34:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.014 06:34:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.014 06:34:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.014 06:34:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.014 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:10:08.014 [2024-07-12 06:34:47.820792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:08.014 [2024-07-12 06:34:47.820893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73605 ] 00:10:08.273 [2024-07-12 06:34:47.965561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.273 [2024-07-12 06:34:48.006467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.839 06:34:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:08.839 06:34:48 -- common/autotest_common.sh@852 -- # return 0 00:10:08.839 06:34:48 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:08.839 06:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:08.839 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:10:09.097 NVMe0n1 00:10:09.097 06:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:09.097 06:34:48 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:09.097 Running I/O for 10 seconds... 
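For the queue-depth pass, bdevperf starts idle (-z) on its own RPC socket, the NVMe-oF controller is attached over that socket, and the run is kicked off by the helper script; condensed from the commands traced here:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests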
00:10:21.299 00:10:21.299 Latency(us) 00:10:21.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.299 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:21.299 Verification LBA range: start 0x0 length 0x4000 00:10:21.299 NVMe0n1 : 10.09 12470.34 48.71 0.00 0.00 81709.81 16562.73 151566.89 00:10:21.299 =================================================================================================================== 00:10:21.299 Total : 12470.34 48.71 0.00 0.00 81709.81 16562.73 151566.89 00:10:21.299 0 00:10:21.299 06:34:59 -- target/queue_depth.sh@39 -- # killprocess 73605 00:10:21.299 06:34:59 -- common/autotest_common.sh@926 -- # '[' -z 73605 ']' 00:10:21.299 06:34:59 -- common/autotest_common.sh@930 -- # kill -0 73605 00:10:21.299 06:34:59 -- common/autotest_common.sh@931 -- # uname 00:10:21.299 06:34:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:21.299 06:34:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73605 00:10:21.299 06:34:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:21.299 06:34:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:21.299 06:34:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73605' 00:10:21.299 killing process with pid 73605 00:10:21.299 06:34:59 -- common/autotest_common.sh@945 -- # kill 73605 00:10:21.299 Received shutdown signal, test time was about 10.000000 seconds 00:10:21.299 00:10:21.299 Latency(us) 00:10:21.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.299 =================================================================================================================== 00:10:21.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:21.299 06:34:59 -- common/autotest_common.sh@950 -- # wait 73605 00:10:21.299 06:34:59 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:21.299 06:34:59 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:21.299 06:34:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:21.299 06:34:59 -- nvmf/common.sh@116 -- # sync 00:10:21.299 06:34:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:21.299 06:34:59 -- nvmf/common.sh@119 -- # set +e 00:10:21.299 06:34:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:21.299 06:34:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:21.299 rmmod nvme_tcp 00:10:21.299 rmmod nvme_fabrics 00:10:21.299 rmmod nvme_keyring 00:10:21.299 06:34:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:21.299 06:34:59 -- nvmf/common.sh@123 -- # set -e 00:10:21.299 06:34:59 -- nvmf/common.sh@124 -- # return 0 00:10:21.299 06:34:59 -- nvmf/common.sh@477 -- # '[' -n 73573 ']' 00:10:21.299 06:34:59 -- nvmf/common.sh@478 -- # killprocess 73573 00:10:21.299 06:34:59 -- common/autotest_common.sh@926 -- # '[' -z 73573 ']' 00:10:21.299 06:34:59 -- common/autotest_common.sh@930 -- # kill -0 73573 00:10:21.299 06:34:59 -- common/autotest_common.sh@931 -- # uname 00:10:21.299 06:34:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:21.299 06:34:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73573 00:10:21.299 06:34:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:21.299 06:34:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:21.299 06:34:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73573' 00:10:21.299 killing process with pid 73573 00:10:21.299 06:34:59 -- 
common/autotest_common.sh@945 -- # kill 73573 00:10:21.299 06:34:59 -- common/autotest_common.sh@950 -- # wait 73573 00:10:21.299 06:34:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:21.299 06:34:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:21.299 06:34:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:21.300 06:34:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:21.300 06:34:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.300 06:34:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.300 06:34:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.300 06:34:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:21.300 00:10:21.300 real 0m13.439s 00:10:21.300 user 0m23.355s 00:10:21.300 sys 0m1.997s 00:10:21.300 06:34:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.300 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:10:21.300 ************************************ 00:10:21.300 END TEST nvmf_queue_depth 00:10:21.300 ************************************ 00:10:21.300 06:34:59 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:21.300 06:34:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:21.300 06:34:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:21.300 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:10:21.300 ************************************ 00:10:21.300 START TEST nvmf_multipath 00:10:21.300 ************************************ 00:10:21.300 06:34:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:21.300 * Looking for test storage... 
00:10:21.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.300 06:34:59 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.300 06:34:59 -- nvmf/common.sh@7 -- # uname -s 00:10:21.300 06:34:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.300 06:34:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.300 06:34:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.300 06:34:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.300 06:34:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.300 06:34:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.300 06:34:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.300 06:34:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.300 06:34:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.300 06:34:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:10:21.300 06:34:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:10:21.300 06:34:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.300 06:34:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.300 06:34:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.300 06:34:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.300 06:34:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.300 06:34:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.300 06:34:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.300 06:34:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.300 06:34:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.300 06:34:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.300 06:34:59 -- 
paths/export.sh@5 -- # export PATH 00:10:21.300 06:34:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.300 06:34:59 -- nvmf/common.sh@46 -- # : 0 00:10:21.300 06:34:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:21.300 06:34:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:21.300 06:34:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:21.300 06:34:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.300 06:34:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.300 06:34:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:21.300 06:34:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:21.300 06:34:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:21.300 06:34:59 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.300 06:34:59 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.300 06:34:59 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:21.300 06:34:59 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.300 06:34:59 -- target/multipath.sh@43 -- # nvmftestinit 00:10:21.300 06:34:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:21.300 06:34:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.300 06:34:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:21.300 06:34:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:21.300 06:34:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:21.300 06:34:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.300 06:34:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.300 06:34:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.300 06:34:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:21.300 06:34:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:21.300 06:34:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.300 06:34:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.300 06:34:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.300 06:34:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:21.300 06:34:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.300 06:34:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.300 06:34:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.300 06:34:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.300 06:34:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.300 06:34:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.300 06:34:59 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.300 06:34:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.300 06:34:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:21.300 06:34:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:21.300 Cannot find device "nvmf_tgt_br" 00:10:21.300 06:34:59 -- nvmf/common.sh@154 -- # true 00:10:21.300 06:34:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.300 Cannot find device "nvmf_tgt_br2" 00:10:21.300 06:34:59 -- nvmf/common.sh@155 -- # true 00:10:21.300 06:34:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:21.300 06:34:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:21.300 Cannot find device "nvmf_tgt_br" 00:10:21.300 06:34:59 -- nvmf/common.sh@157 -- # true 00:10:21.300 06:34:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:21.300 Cannot find device "nvmf_tgt_br2" 00:10:21.300 06:34:59 -- nvmf/common.sh@158 -- # true 00:10:21.301 06:34:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:21.301 06:34:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:21.301 06:34:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.301 06:34:59 -- nvmf/common.sh@161 -- # true 00:10:21.301 06:34:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.301 06:34:59 -- nvmf/common.sh@162 -- # true 00:10:21.301 06:34:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.301 06:34:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.301 06:34:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.301 06:34:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.301 06:34:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.301 06:34:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.301 06:34:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.301 06:34:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:21.301 06:34:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:21.301 06:34:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:21.301 06:34:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:21.301 06:34:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:21.301 06:34:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:21.301 06:34:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.301 06:34:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:21.301 06:34:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:21.301 06:35:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:21.301 06:35:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:21.301 06:35:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.301 06:35:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:21.301 06:35:00 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.301 06:35:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.301 06:35:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.301 06:35:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:21.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:21.301 00:10:21.301 --- 10.0.0.2 ping statistics --- 00:10:21.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.301 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:21.301 06:35:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:21.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:21.301 00:10:21.301 --- 10.0.0.3 ping statistics --- 00:10:21.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.301 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:21.301 06:35:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:21.301 00:10:21.301 --- 10.0.0.1 ping statistics --- 00:10:21.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.301 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:21.301 06:35:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.301 06:35:00 -- nvmf/common.sh@421 -- # return 0 00:10:21.301 06:35:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:21.301 06:35:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.301 06:35:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:21.301 06:35:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:21.301 06:35:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.301 06:35:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:21.301 06:35:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:21.301 06:35:00 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:21.301 06:35:00 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:21.301 06:35:00 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:21.301 06:35:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:21.301 06:35:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:21.301 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:10:21.301 06:35:00 -- nvmf/common.sh@469 -- # nvmfpid=73927 00:10:21.301 06:35:00 -- nvmf/common.sh@470 -- # waitforlisten 73927 00:10:21.301 06:35:00 -- common/autotest_common.sh@819 -- # '[' -z 73927 ']' 00:10:21.301 06:35:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.301 06:35:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.301 06:35:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:21.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.301 06:35:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
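The veth topology that nvmf_veth_init assembled above is small enough to rebuild by hand. This sketch keeps the harness's device names and 10.0.0.0/24 plan: the initiator end (nvmf_init_if, 10.0.0.1) stays in the root namespace, both target interfaces live in nvmf_tgt_ns_spdk, and the bridge-side peers hang off nvmf_br.

# Initiator in the root netns, target behind two veths in its own netns,
# all joined by the nvmf_br bridge (mirrors the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2     # initiator -> first target path
ping -c 1 10.0.0.3     # initiator -> second target path
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1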
00:10:21.301 06:35:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:21.301 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:10:21.301 [2024-07-12 06:35:00.175297] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:21.301 [2024-07-12 06:35:00.175403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.301 [2024-07-12 06:35:00.317745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.301 [2024-07-12 06:35:00.362850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:21.301 [2024-07-12 06:35:00.363052] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.301 [2024-07-12 06:35:00.363070] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.301 [2024-07-12 06:35:00.363081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.301 [2024-07-12 06:35:00.363574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.301 [2024-07-12 06:35:00.363670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.301 [2024-07-12 06:35:00.363729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.301 [2024-07-12 06:35:00.363738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.301 06:35:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:21.301 06:35:01 -- common/autotest_common.sh@852 -- # return 0 00:10:21.301 06:35:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:21.301 06:35:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:21.301 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:10:21.560 06:35:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.560 06:35:01 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.818 [2024-07-12 06:35:01.569517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.818 06:35:01 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:22.076 Malloc0 00:10:22.076 06:35:01 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:22.336 06:35:02 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.595 06:35:02 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.872 [2024-07-12 06:35:02.666192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.872 06:35:02 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:23.138 [2024-07-12 06:35:02.914391] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:23.138 06:35:02 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:23.397 06:35:03 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:23.397 06:35:03 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.397 06:35:03 -- common/autotest_common.sh@1177 -- # local i=0 00:10:23.397 06:35:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.397 06:35:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:23.397 06:35:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:25.298 06:35:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:25.298 06:35:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.298 06:35:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:25.298 06:35:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:25.298 06:35:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.298 06:35:05 -- common/autotest_common.sh@1187 -- # return 0 00:10:25.298 06:35:05 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:25.298 06:35:05 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:25.298 06:35:05 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:25.298 06:35:05 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:25.298 06:35:05 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:25.298 06:35:05 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:25.557 06:35:05 -- target/multipath.sh@38 -- # return 0 00:10:25.557 06:35:05 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:25.557 06:35:05 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:25.557 06:35:05 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:25.557 06:35:05 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:25.557 06:35:05 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:25.557 06:35:05 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:25.557 06:35:05 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:25.557 06:35:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:25.557 06:35:05 -- target/multipath.sh@22 -- # local timeout=20 00:10:25.557 06:35:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:25.557 06:35:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:25.557 06:35:05 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:25.557 06:35:05 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:25.557 06:35:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:25.557 06:35:05 -- target/multipath.sh@22 -- # local timeout=20 00:10:25.557 06:35:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:25.557 06:35:05 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:25.557 06:35:05 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:25.557 06:35:05 -- target/multipath.sh@85 -- # echo numa 00:10:25.557 06:35:05 -- target/multipath.sh@88 -- # fio_pid=74022 00:10:25.557 06:35:05 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:25.557 06:35:05 -- target/multipath.sh@90 -- # sleep 1 00:10:25.557 [global] 00:10:25.557 thread=1 00:10:25.557 invalidate=1 00:10:25.557 rw=randrw 00:10:25.557 time_based=1 00:10:25.557 runtime=6 00:10:25.557 ioengine=libaio 00:10:25.557 direct=1 00:10:25.557 bs=4096 00:10:25.557 iodepth=128 00:10:25.557 norandommap=0 00:10:25.557 numjobs=1 00:10:25.557 00:10:25.557 verify_dump=1 00:10:25.557 verify_backlog=512 00:10:25.557 verify_state_save=0 00:10:25.557 do_verify=1 00:10:25.557 verify=crc32c-intel 00:10:25.557 [job0] 00:10:25.557 filename=/dev/nvme0n1 00:10:25.557 Could not set queue depth (nvme0n1) 00:10:25.557 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.557 fio-3.35 00:10:25.557 Starting 1 thread 00:10:26.492 06:35:06 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:26.751 06:35:06 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:27.010 06:35:06 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:27.010 06:35:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:27.010 06:35:06 -- target/multipath.sh@22 -- # local timeout=20 00:10:27.010 06:35:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.010 06:35:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.010 06:35:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:27.010 06:35:06 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:27.010 06:35:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:27.010 06:35:06 -- target/multipath.sh@22 -- # local timeout=20 00:10:27.010 06:35:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.010 06:35:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.010 06:35:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:27.010 06:35:06 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:27.268 06:35:07 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:27.526 06:35:07 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:27.526 06:35:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:27.526 06:35:07 -- target/multipath.sh@22 -- # local timeout=20 00:10:27.526 06:35:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.526 06:35:07 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.526 06:35:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:27.526 06:35:07 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:27.526 06:35:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:27.526 06:35:07 -- target/multipath.sh@22 -- # local timeout=20 00:10:27.526 06:35:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.526 06:35:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.526 06:35:07 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:27.526 06:35:07 -- target/multipath.sh@104 -- # wait 74022 00:10:31.713 00:10:31.713 job0: (groupid=0, jobs=1): err= 0: pid=74043: Fri Jul 12 06:35:11 2024 00:10:31.713 read: IOPS=10.9k, BW=42.6MiB/s (44.7MB/s)(256MiB/6007msec) 00:10:31.713 slat (usec): min=5, max=5672, avg=52.94, stdev=222.98 00:10:31.713 clat (usec): min=1586, max=13989, avg=7957.99, stdev=1443.69 00:10:31.713 lat (usec): min=1596, max=14008, avg=8010.93, stdev=1449.13 00:10:31.713 clat percentiles (usec): 00:10:31.713 | 1.00th=[ 4178], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7177], 00:10:31.713 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:10:31.713 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11469], 00:10:31.713 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13173], 99.95th=[13435], 00:10:31.713 | 99.99th=[13829] 00:10:31.713 bw ( KiB/s): min= 7744, max=28424, per=52.36%, avg=22865.33, stdev=5954.28, samples=12 00:10:31.713 iops : min= 1936, max= 7104, avg=5716.33, stdev=1488.45, samples=12 00:10:31.713 write: IOPS=6328, BW=24.7MiB/s (25.9MB/s)(134MiB/5425msec); 0 zone resets 00:10:31.713 slat (usec): min=15, max=2819, avg=63.76, stdev=153.57 00:10:31.713 clat (usec): min=932, max=13813, avg=6989.08, stdev=1247.56 00:10:31.713 lat (usec): min=976, max=13839, avg=7052.84, stdev=1252.54 00:10:31.713 clat percentiles (usec): 00:10:31.713 | 1.00th=[ 3228], 5.00th=[ 4178], 10.00th=[ 5342], 20.00th=[ 6521], 00:10:31.713 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7308], 00:10:31.713 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8356], 00:10:31.713 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12256], 99.95th=[12518], 00:10:31.713 | 99.99th=[13173] 00:10:31.713 bw ( KiB/s): min= 8192, max=27640, per=90.25%, avg=22846.67, stdev=5753.61, samples=12 00:10:31.713 iops : min= 2048, max= 6910, avg=5711.67, stdev=1438.40, samples=12 00:10:31.713 lat (usec) : 1000=0.01% 00:10:31.713 lat (msec) : 2=0.04%, 4=1.79%, 10=92.31%, 20=5.85% 00:10:31.713 cpu : usr=6.01%, sys=22.58%, ctx=5777, majf=0, minf=108 00:10:31.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:31.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.713 issued rwts: total=65578,34333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.713 00:10:31.713 Run status group 0 (all jobs): 00:10:31.713 READ: bw=42.6MiB/s (44.7MB/s), 42.6MiB/s-42.6MiB/s (44.7MB/s-44.7MB/s), io=256MiB (269MB), run=6007-6007msec 00:10:31.713 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=134MiB (141MB), run=5425-5425msec 00:10:31.713 00:10:31.713 Disk stats (read/write): 00:10:31.713 
nvme0n1: ios=64635/33676, merge=0/0, ticks=490734/220065, in_queue=710799, util=98.65% 00:10:31.713 06:35:11 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:31.971 06:35:11 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:32.228 06:35:12 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:32.228 06:35:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:32.228 06:35:12 -- target/multipath.sh@22 -- # local timeout=20 00:10:32.228 06:35:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:32.228 06:35:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:32.228 06:35:12 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:32.228 06:35:12 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:32.228 06:35:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:32.228 06:35:12 -- target/multipath.sh@22 -- # local timeout=20 00:10:32.228 06:35:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:32.228 06:35:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:32.228 06:35:12 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:32.228 06:35:12 -- target/multipath.sh@113 -- # echo round-robin 00:10:32.228 06:35:12 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:32.228 06:35:12 -- target/multipath.sh@116 -- # fio_pid=74119 00:10:32.228 06:35:12 -- target/multipath.sh@118 -- # sleep 1 00:10:32.228 [global] 00:10:32.228 thread=1 00:10:32.228 invalidate=1 00:10:32.228 rw=randrw 00:10:32.228 time_based=1 00:10:32.228 runtime=6 00:10:32.228 ioengine=libaio 00:10:32.228 direct=1 00:10:32.228 bs=4096 00:10:32.228 iodepth=128 00:10:32.228 norandommap=0 00:10:32.228 numjobs=1 00:10:32.228 00:10:32.228 verify_dump=1 00:10:32.228 verify_backlog=512 00:10:32.228 verify_state_save=0 00:10:32.228 do_verify=1 00:10:32.228 verify=crc32c-intel 00:10:32.228 [job0] 00:10:32.228 filename=/dev/nvme0n1 00:10:32.228 Could not set queue depth (nvme0n1) 00:10:32.485 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.485 fio-3.35 00:10:32.485 Starting 1 thread 00:10:33.418 06:35:13 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:33.675 06:35:13 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:33.932 06:35:13 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:33.932 06:35:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:33.932 06:35:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:33.933 06:35:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:33.933 06:35:13 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:33.933 06:35:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:33.933 06:35:13 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:33.933 06:35:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:33.933 06:35:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:33.933 06:35:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:33.933 06:35:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:33.933 06:35:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:33.933 06:35:13 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:33.933 06:35:13 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:34.190 06:35:14 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:34.190 06:35:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:34.190 06:35:14 -- target/multipath.sh@22 -- # local timeout=20 00:10:34.190 06:35:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.190 06:35:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.190 06:35:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.190 06:35:14 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:34.190 06:35:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:34.190 06:35:14 -- target/multipath.sh@22 -- # local timeout=20 00:10:34.190 06:35:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.190 06:35:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.190 06:35:14 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.190 06:35:14 -- target/multipath.sh@132 -- # wait 74119 00:10:39.507 00:10:39.507 job0: (groupid=0, jobs=1): err= 0: pid=74146: Fri Jul 12 06:35:18 2024 00:10:39.507 read: IOPS=11.7k, BW=45.7MiB/s (48.0MB/s)(275MiB/6002msec) 00:10:39.507 slat (usec): min=3, max=10908, avg=41.27, stdev=198.08 00:10:39.507 clat (usec): min=432, max=24832, avg=7368.20, stdev=2098.24 00:10:39.507 lat (usec): min=443, max=24847, avg=7409.47, stdev=2113.91 00:10:39.507 clat percentiles (usec): 00:10:39.507 | 1.00th=[ 3228], 5.00th=[ 4015], 10.00th=[ 4752], 20.00th=[ 5669], 00:10:39.507 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7832], 00:10:39.507 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10945], 00:10:39.507 | 99.00th=[13173], 99.50th=[15401], 99.90th=[22152], 99.95th=[23200], 00:10:39.507 | 99.99th=[24511] 00:10:39.507 bw ( KiB/s): min= 7088, max=38960, per=54.45%, avg=25499.64, stdev=8000.54, samples=11 00:10:39.507 iops : min= 1772, max= 9740, avg=6374.91, stdev=2000.13, samples=11 00:10:39.507 write: IOPS=6999, BW=27.3MiB/s (28.7MB/s)(149MiB/5448msec); 0 zone resets 00:10:39.507 slat (usec): min=4, max=4483, avg=54.70, stdev=137.12 00:10:39.507 clat (usec): min=1266, max=23231, avg=6398.78, stdev=2020.90 00:10:39.507 lat (usec): min=1303, max=23264, avg=6453.48, stdev=2037.53 00:10:39.507 clat percentiles (usec): 00:10:39.507 | 1.00th=[ 2540], 5.00th=[ 3195], 10.00th=[ 3621], 20.00th=[ 4359], 00:10:39.507 | 30.00th=[ 5276], 40.00th=[ 6456], 50.00th=[ 6849], 60.00th=[ 7177], 00:10:39.507 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8160], 95.00th=[ 8848], 00:10:39.507 | 99.00th=[11731], 99.50th=[14353], 99.90th=[20579], 99.95th=[21627], 00:10:39.507 | 99.99th=[22938] 00:10:39.507 bw ( KiB/s): min= 7472, max=38264, per=90.98%, avg=25472.00, stdev=7881.86, samples=11 00:10:39.507 iops : min= 1868, max= 9566, avg=6368.00, stdev=1970.46, samples=11 00:10:39.507 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:10:39.507 lat (msec) : 2=0.14%, 4=8.06%, 10=86.29%, 20=5.27%, 50=0.19% 00:10:39.507 cpu : usr=6.07%, sys=24.61%, ctx=6012, majf=0, minf=114 00:10:39.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:39.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.508 issued rwts: total=70272,38134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.508 00:10:39.508 Run status group 0 (all jobs): 00:10:39.508 READ: bw=45.7MiB/s (48.0MB/s), 45.7MiB/s-45.7MiB/s (48.0MB/s-48.0MB/s), io=275MiB (288MB), run=6002-6002msec 00:10:39.508 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=149MiB (156MB), run=5448-5448msec 00:10:39.508 00:10:39.508 Disk stats (read/write): 00:10:39.508 nvme0n1: ios=69508/37490, merge=0/0, ticks=485933/222054, in_queue=707987, util=98.60% 00:10:39.508 06:35:18 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:39.508 06:35:18 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.508 06:35:18 -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.508 06:35:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:39.508 
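check_ana_state, exercised repeatedly in the traces above, is a small polling loop over sysfs. This reconstruction follows its trace (path, ana_state, timeout=20) rather than the verbatim helper. Note the deliberate spelling mismatch: the RPC takes non_optimized while the kernel's ana_state file reports non-optimized.

# Poll /sys/block/<path>/ana_state until the kernel reports the expected
# ANA state, giving up after ~20 attempts (a sketch, not the exact helper).
check_ana_state() {
    local path=$1 ana_state=$2 timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1
    done
}
# Failover as driven above: 10.0.0.2 goes inaccessible, 10.0.0.3 serves I/O.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized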
06:35:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.508 06:35:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:39.508 06:35:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.508 06:35:18 -- common/autotest_common.sh@1210 -- # return 0 00:10:39.508 06:35:18 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.508 06:35:18 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:39.508 06:35:18 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:39.508 06:35:18 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:39.508 06:35:18 -- target/multipath.sh@144 -- # nvmftestfini 00:10:39.508 06:35:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:39.508 06:35:18 -- nvmf/common.sh@116 -- # sync 00:10:39.508 06:35:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:39.508 06:35:18 -- nvmf/common.sh@119 -- # set +e 00:10:39.508 06:35:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:39.508 06:35:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:39.508 rmmod nvme_tcp 00:10:39.508 rmmod nvme_fabrics 00:10:39.508 rmmod nvme_keyring 00:10:39.508 06:35:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:39.508 06:35:18 -- nvmf/common.sh@123 -- # set -e 00:10:39.508 06:35:18 -- nvmf/common.sh@124 -- # return 0 00:10:39.508 06:35:18 -- nvmf/common.sh@477 -- # '[' -n 73927 ']' 00:10:39.508 06:35:18 -- nvmf/common.sh@478 -- # killprocess 73927 00:10:39.508 06:35:18 -- common/autotest_common.sh@926 -- # '[' -z 73927 ']' 00:10:39.508 06:35:18 -- common/autotest_common.sh@930 -- # kill -0 73927 00:10:39.508 06:35:18 -- common/autotest_common.sh@931 -- # uname 00:10:39.508 06:35:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:39.508 06:35:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73927 00:10:39.508 killing process with pid 73927 00:10:39.508 06:35:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:39.508 06:35:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:39.508 06:35:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73927' 00:10:39.508 06:35:18 -- common/autotest_common.sh@945 -- # kill 73927 00:10:39.508 06:35:18 -- common/autotest_common.sh@950 -- # wait 73927 00:10:39.508 06:35:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:39.508 06:35:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:39.508 06:35:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:39.508 06:35:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.508 06:35:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:39.508 06:35:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.508 06:35:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.508 06:35:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.508 06:35:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:39.508 00:10:39.508 real 0m19.451s 00:10:39.508 user 1m13.666s 00:10:39.508 sys 0m9.477s 00:10:39.508 06:35:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.508 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:10:39.508 ************************************ 00:10:39.508 END TEST nvmf_multipath 00:10:39.508 ************************************ 00:10:39.508 06:35:19 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:39.508 06:35:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:39.508 06:35:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.508 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:10:39.508 ************************************ 00:10:39.508 START TEST nvmf_zcopy 00:10:39.508 ************************************ 00:10:39.508 06:35:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:39.508 * Looking for test storage... 00:10:39.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:39.508 06:35:19 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:39.508 06:35:19 -- nvmf/common.sh@7 -- # uname -s 00:10:39.508 06:35:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.508 06:35:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.508 06:35:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.508 06:35:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.508 06:35:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.508 06:35:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.508 06:35:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.508 06:35:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.508 06:35:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.508 06:35:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.508 06:35:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:10:39.508 06:35:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:10:39.508 06:35:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.508 06:35:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.508 06:35:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:39.508 06:35:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.508 06:35:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.508 06:35:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.508 06:35:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.508 06:35:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.508 06:35:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
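run_test, whose banners frame each test above, behaves roughly as sketched below. The shape is inferred from its output (the START TEST/END TEST banners and argument handling), not copied from autotest_common.sh.

# Inferred shape of run_test: banner, run the script, banner, keep the rc.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"                 # e.g. .../test/nvmf/target/zcopy.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp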
00:10:39.508 06:35:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.508 06:35:19 -- paths/export.sh@5 -- # export PATH 00:10:39.508 06:35:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.508 06:35:19 -- nvmf/common.sh@46 -- # : 0 00:10:39.508 06:35:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:39.508 06:35:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:39.508 06:35:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:39.509 06:35:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.509 06:35:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.509 06:35:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:39.509 06:35:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:39.509 06:35:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:39.509 06:35:19 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:39.509 06:35:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:39.509 06:35:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.509 06:35:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:39.509 06:35:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:39.509 06:35:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:39.509 06:35:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.509 06:35:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.509 06:35:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.509 06:35:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:39.509 06:35:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:39.509 06:35:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:39.509 06:35:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:39.509 06:35:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:39.509 06:35:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:39.509 06:35:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.509 06:35:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.509 06:35:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:39.509 06:35:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:39.509 06:35:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:39.509 06:35:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:39.509 06:35:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:39.509 06:35:19 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.509 06:35:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:39.509 06:35:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:39.509 06:35:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:39.509 06:35:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:39.509 06:35:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:39.509 06:35:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:39.509 Cannot find device "nvmf_tgt_br" 00:10:39.509 06:35:19 -- nvmf/common.sh@154 -- # true 00:10:39.509 06:35:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.509 Cannot find device "nvmf_tgt_br2" 00:10:39.509 06:35:19 -- nvmf/common.sh@155 -- # true 00:10:39.509 06:35:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:39.509 06:35:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:39.509 Cannot find device "nvmf_tgt_br" 00:10:39.509 06:35:19 -- nvmf/common.sh@157 -- # true 00:10:39.509 06:35:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:39.509 Cannot find device "nvmf_tgt_br2" 00:10:39.509 06:35:19 -- nvmf/common.sh@158 -- # true 00:10:39.509 06:35:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:39.509 06:35:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:39.509 06:35:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.509 06:35:19 -- nvmf/common.sh@161 -- # true 00:10:39.509 06:35:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.509 06:35:19 -- nvmf/common.sh@162 -- # true 00:10:39.509 06:35:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:39.509 06:35:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:39.509 06:35:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:39.509 06:35:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:39.509 06:35:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:39.509 06:35:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:39.767 06:35:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:39.767 06:35:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:39.767 06:35:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:39.767 06:35:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:39.767 06:35:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:39.767 06:35:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:39.767 06:35:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:39.767 06:35:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:39.767 06:35:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:39.767 06:35:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:39.767 06:35:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:10:39.767 06:35:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:39.767 06:35:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:39.767 06:35:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:39.767 06:35:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:39.767 06:35:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:39.767 06:35:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:39.767 06:35:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:39.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:10:39.767 00:10:39.767 --- 10.0.0.2 ping statistics --- 00:10:39.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.767 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:39.767 06:35:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:39.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:39.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:39.767 00:10:39.767 --- 10.0.0.3 ping statistics --- 00:10:39.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.767 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:39.767 06:35:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:39.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:39.767 00:10:39.767 --- 10.0.0.1 ping statistics --- 00:10:39.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.767 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:39.767 06:35:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.767 06:35:19 -- nvmf/common.sh@421 -- # return 0 00:10:39.768 06:35:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:39.768 06:35:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.768 06:35:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:39.768 06:35:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:39.768 06:35:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.768 06:35:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:39.768 06:35:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:39.768 06:35:19 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:39.768 06:35:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:39.768 06:35:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:39.768 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:10:39.768 06:35:19 -- nvmf/common.sh@469 -- # nvmfpid=74391 00:10:39.768 06:35:19 -- nvmf/common.sh@470 -- # waitforlisten 74391 00:10:39.768 06:35:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:39.768 06:35:19 -- common/autotest_common.sh@819 -- # '[' -z 74391 ']' 00:10:39.768 06:35:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.768 06:35:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:39.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.768 06:35:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
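nvmfappstart above backgrounds nvmf_tgt inside the target namespace and then blocks in waitforlisten. The approximation below follows its trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100); rpc_get_methods is a standard SPDK RPC used here as a liveness probe, though the exact probe in autotest_common.sh may differ.

# Approximation of waitforlisten: spin until the app's RPC socket answers,
# bailing out if the process dies or max_retries is exhausted.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" || return 1       # target died during startup
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
            rpc_get_methods &> /dev/null; then
            return 0                     # socket is up and serving RPCs
        fi
        sleep 0.5
    done
    return 1
}
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                               # 74391 in this run
waitforlisten "$nvmfpid"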
00:10:39.768 06:35:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:39.768 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:10:39.768 [2024-07-12 06:35:19.657727] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:39.768 [2024-07-12 06:35:19.657826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.026 [2024-07-12 06:35:19.797464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.026 [2024-07-12 06:35:19.836989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:40.026 [2024-07-12 06:35:19.837161] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.026 [2024-07-12 06:35:19.837176] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.026 [2024-07-12 06:35:19.837186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.026 [2024-07-12 06:35:19.837217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.959 06:35:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:40.959 06:35:20 -- common/autotest_common.sh@852 -- # return 0 00:10:40.959 06:35:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:40.959 06:35:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 06:35:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.959 06:35:20 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:40.959 06:35:20 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:40.959 06:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 [2024-07-12 06:35:20.674759] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.959 06:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.959 06:35:20 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:40.959 06:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 06:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.959 06:35:20 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.959 06:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 [2024-07-12 06:35:20.694693] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.959 06:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.959 06:35:20 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.959 06:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 06:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.959 06:35:20 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
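The zcopy target bring-up just traced reduces to a handful of rpc.py calls; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock socket. The flags mirror the trace (-o and -c 0 come from NVMF_TRANSPORT_OPTS, --zcopy is the option under test), and the final add_ns is the step traced immediately below.

# The bring-up above as plain rpc.py calls; --zcopy enables zero-copy
# receives in the TCP transport, which is what this test exercises.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10     # allow any host, serial, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # next traced step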
00:10:40.959 06:35:20 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:10:40.959 06:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable
00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x
00:10:40.959 malloc0
00:10:40.959 06:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:10:40.959 06:35:20 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:10:40.959 06:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable
00:10:40.959 06:35:20 -- common/autotest_common.sh@10 -- # set +x
00:10:40.959 06:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:10:40.959 06:35:20 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:10:40.959 06:35:20 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:10:40.959 06:35:20 -- nvmf/common.sh@520 -- # config=()
00:10:40.959 06:35:20 -- nvmf/common.sh@520 -- # local subsystem config
00:10:40.959 06:35:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:10:40.959 06:35:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:10:40.959 {
00:10:40.959 "params": {
00:10:40.959 "name": "Nvme$subsystem",
00:10:40.959 "trtype": "$TEST_TRANSPORT",
00:10:40.959 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:40.959 "adrfam": "ipv4",
00:10:40.959 "trsvcid": "$NVMF_PORT",
00:10:40.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:40.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:40.959 "hdgst": ${hdgst:-false},
00:10:40.959 "ddgst": ${ddgst:-false}
00:10:40.959 },
00:10:40.959 "method": "bdev_nvme_attach_controller"
00:10:40.959 }
00:10:40.959 EOF
00:10:40.959 )")
00:10:40.959 06:35:20 -- nvmf/common.sh@542 -- # cat
00:10:40.959 06:35:20 -- nvmf/common.sh@544 -- # jq .
00:10:40.959 06:35:20 -- nvmf/common.sh@545 -- # IFS=,
00:10:40.959 06:35:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:10:40.959 "params": {
00:10:40.959 "name": "Nvme1",
00:10:40.959 "trtype": "tcp",
00:10:40.959 "traddr": "10.0.0.2",
00:10:40.959 "adrfam": "ipv4",
00:10:40.959 "trsvcid": "4420",
00:10:40.959 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:40.959 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:40.959 "hdgst": false,
00:10:40.959 "ddgst": false
00:10:40.959 },
00:10:40.959 "method": "bdev_nvme_attach_controller"
00:10:40.959 }'
00:10:40.959 [2024-07-12 06:35:20.777848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:10:40.959 [2024-07-12 06:35:20.777972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74424 ]
00:10:41.217 [2024-07-12 06:35:20.918468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:41.217 [2024-07-12 06:35:20.958070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.217 Running I/O for 10 seconds...
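The heredoc traced above is how gen_nvmf_target_json turns each subsystem number into a bdev_nvme_attach_controller entry; bdevperf then reads the assembled configuration from the /dev/fd/62 process-substitution descriptor, attaches Nvme1 over NVMe/TCP, and drives the verify workload (queue depth 128, 8 KiB I/Os, 10 seconds). The same run can be reproduced without the harness by writing the configuration to an ordinary file; the wrapper object below follows the usual SPDK subsystem-config shape (an assumption about what gen_nvmf_target_json emits around the traced fragment), and nvme.json is a hypothetical filename:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

    # then, from the SPDK repository root:
    ./build/examples/bdevperf --json nvme.json -t 10 -q 128 -w verify -o 8192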
00:10:53.418
00:10:53.418                                                                  Latency(us)
00:10:53.418 Device Information                                             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:53.418 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:53.418 Verification LBA range: start 0x0 length 0x1000
00:10:53.418 Nvme1n1                                                        :      10.01    8691.62      67.90       0.00     0.00   14687.95    1139.43   28240.06
00:10:53.418 ===================================================================================================================
00:10:53.418 Total                                                          :               8691.62      67.90       0.00     0.00   14687.95    1139.43   28240.06
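As a quick consistency check on the table, the throughput and latency columns agree with each other for this fixed-queue-depth run:

    throughput = IOPS x IO size = 8691.62 x 8192 B ~ 71.2 MB/s ~ 67.90 MiB/s   (the MiB/s column)
    latency    = depth / IOPS   = 128 / 8691.62 s  ~ 14.73 ms  ~ 14730 us      (Little's law at full depth; close to the reported 14687.95 us average)

so the verify run sustained roughly 8.7k 8 KiB IOPS over the zero-copy TCP transport, with per-I/O latencies between about 1.1 ms and 28 ms.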
00:10:53.418 06:35:31 -- target/zcopy.sh@39 -- # perfpid=74540
00:10:53.418 06:35:31 -- target/zcopy.sh@41 -- # xtrace_disable
00:10:53.418 06:35:31 -- common/autotest_common.sh@10 -- # set +x
00:10:53.418 06:35:31 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:53.418 06:35:31 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:53.418 06:35:31 -- nvmf/common.sh@520 -- # config=()
00:10:53.418 06:35:31 -- nvmf/common.sh@520 -- # local subsystem config
00:10:53.418 06:35:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:10:53.418 06:35:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:10:53.418 {
00:10:53.418 "params": {
00:10:53.418 "name": "Nvme$subsystem",
00:10:53.418 "trtype": "$TEST_TRANSPORT",
00:10:53.418 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:53.418 "adrfam": "ipv4",
00:10:53.418 "trsvcid": "$NVMF_PORT",
00:10:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:53.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:53.418 "hdgst": ${hdgst:-false},
00:10:53.418 "ddgst": ${ddgst:-false}
00:10:53.418 },
00:10:53.418 "method": "bdev_nvme_attach_controller"
00:10:53.418 }
00:10:53.418 EOF
00:10:53.418 )")
00:10:53.418 06:35:31 -- nvmf/common.sh@542 -- # cat
00:10:53.418 [2024-07-12 06:35:31.253330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:53.418 [2024-07-12 06:35:31.253380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:53.418 06:35:31 -- nvmf/common.sh@544 -- # jq .
00:10:53.418 06:35:31 -- nvmf/common.sh@545 -- # IFS=,
00:10:53.418 06:35:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:10:53.418 "params": {
00:10:53.418 "name": "Nvme1",
00:10:53.418 "trtype": "tcp",
00:10:53.418 "traddr": "10.0.0.2",
00:10:53.418 "adrfam": "ipv4",
00:10:53.418 "trsvcid": "4420",
00:10:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:53.418 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:53.418 "hdgst": false,
00:10:53.418 "ddgst": false
00:10:53.418 },
00:10:53.418 "method": "bdev_nvme_attach_controller"
00:10:53.418 }'
[2024-07-12 06:35:31.265330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.265398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.277330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.277403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.289312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.289371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.300930] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
[2024-07-12 06:35:31.301066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74540 ]
[2024-07-12 06:35:31.301309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.301344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.313348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.313643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.325367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.325615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.337371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.337673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.349360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.349642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.361363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.361634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-12 06:35:31.373336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 06:35:31.373610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
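The two errors that begin repeating here come from the target's add-namespace RPC path: spdk_nvmf_subsystem_add_ns_ext rejects the request because NSID 1 is already occupied by the namespace added at 06:35:20, and nvmf_rpc_ns_paused then reports the failure. A fresh pair appearing every 10-20 ms for the whole 5-second randrw run suggests the test is re-issuing the add-namespace RPC in a loop while zero-copy I/O is in flight, exercising the subsystem pause/resume path, rather than signalling a malfunction. The pair can be reproduced on any target in this state by re-running the earlier command (a sketch; the target already holds malloc0 as NSID 1):

    # a second add for the same NSID fails with exactly the pair seen in the log
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1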
[2024-07-12 06:35:31.385318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:53.419 [2024-07-12 06:35:31.385454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats continuously, with fresh timestamps every 10-20 ms, from 06:35:31.397 through 06:35:35.005; only the timestamps differ, and the elapsed-time prefix advances from 00:10:53.419 to 00:10:55.235 over the stretch. The only other distinct entries in the run, in chronological order, are: ...]
[2024-07-12 06:35:31.439489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-12 06:35:31.473893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 5 seconds...
[2024-07-12 06:35:35.023019]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.023076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.039484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.039532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.056279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.056331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.072374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.072417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.090803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.090850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.104845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.104883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.120520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.120581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.235 [2024-07-12 06:35:35.137245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.235 [2024-07-12 06:35:35.137300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.153733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.153779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.171359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.171426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.185542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.185596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.201671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.201739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.218132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.218169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.234588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.234648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.251794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.251850] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.269010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.269055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.283617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.283671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.300764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.300825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.314884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.314922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.331062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.331099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.348248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.348288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.364372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.364411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.382983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.383030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.493 [2024-07-12 06:35:35.397398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.493 [2024-07-12 06:35:35.397438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.751 [2024-07-12 06:35:35.413206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.751 [2024-07-12 06:35:35.413257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.751 [2024-07-12 06:35:35.430061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.430101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.446897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.446949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.464003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.464042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.481223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.481268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.495950] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.496018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.505515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.505557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.521504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.521543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.540834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.540881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.555088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.555125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.572279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.572325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.588439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.588484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.604777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.604822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.623446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.623490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.638517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.638564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.752 [2024-07-12 06:35:35.657111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.752 [2024-07-12 06:35:35.657172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.671726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.671775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.686898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.686967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.697008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.697051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.712924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.712984] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.728797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.728848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.747257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.747309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.761882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.761933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.779356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.779413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.794238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.794289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.811266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.811315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.826033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.826080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.842888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.842941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.857693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.857740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.867637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.867678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.882513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.882562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.901669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.901713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.916240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.916288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.010 [2024-07-12 06:35:35.926055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.010 [2024-07-12 06:35:35.926096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:35.942357] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:35.942414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:35.959598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:35.959645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:35.975915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:35.975981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:35.993133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:35.993178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:36.007836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:36.007893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:36.017768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:36.017807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:36.033607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.269 [2024-07-12 06:35:36.033661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.269 [2024-07-12 06:35:36.051445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.051495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.066821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.066876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.084420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.084469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.099133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.099190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.115366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.115414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.132824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.132884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.147873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.147924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.156644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.156692] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.172635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.172687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.270 [2024-07-12 06:35:36.181719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.270 [2024-07-12 06:35:36.181760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.197669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.197721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.207502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.207547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.222513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.222572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.241824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.241881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.256851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.256913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.274851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.274902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.290195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.290251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.308026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.308077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.322976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.323035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.333296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.333340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.348438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.348498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.364798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.364848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.382277] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.382343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.398481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.398532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.417279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.417339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.529 [2024-07-12 06:35:36.431992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.529 [2024-07-12 06:35:36.432035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.447930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.447990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.465108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.465152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.481179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.481223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.498630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.498676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.514271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.514323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.533481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.533546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.548450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.548503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.567028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.567089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.582072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.582122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.600516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.600577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788 [2024-07-12 06:35:36.611398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.788 [2024-07-12 06:35:36.611440] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.788
00:10:56.788 Latency(us)
00:10:56.788 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:10:56.788 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:56.788 Nvme1n1             :       5.01   11839.25      92.49      0.00     0.00   10799.92    4230.05   21686.46
00:10:56.789 ===================================================================================================================
00:10:56.789 Total               :              11839.25      92.49      0.00     0.00   10799.92    4230.05   21686.46
00:10:56.789 [2024-07-12 06:35:36.623386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.789 [2024-07-12 06:35:36.623434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair repeats at roughly 12 ms intervals through 06:35:36.743465]
00:10:57.047 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74540) - No such process
00:10:57.047 06:35:36 -- target/zcopy.sh@49 -- # wait 74540
00:10:57.047 06:35:36 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
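The errors collapsed above are expected: the zcopy test keeps re-issuing nvmf_subsystem_add_ns for an NSID that is already attached while I/O runs. As a hedged sketch (NQN, bdev name, and sizes copied from this trace; scripts/rpc.py is SPDK's JSON-RPC client, assumed here to be talking to the running nvmf_tgt), the same rejection can be reproduced by hand:

  # attach a malloc bdev to the subsystem as NSID 1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # any second add that names NSID 1 fails up front with
  # "Requested NSID 1 already in use" (subsystem.c:1793), as logged above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1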
common/autotest_common.sh@551 -- # xtrace_disable 00:10:57.047 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 06:35:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:57.047 06:35:36 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:57.047 06:35:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:57.047 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 delay0 00:10:57.047 06:35:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:57.047 06:35:36 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:57.047 06:35:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:57.047 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 06:35:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:57.047 06:35:36 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:57.306 [2024-07-12 06:35:36.978403] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:03.865 Initializing NVMe Controllers 00:11:03.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:03.865 Initialization complete. Launching workers. 00:11:03.865 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 848 00:11:03.865 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1135, failed to submit 33 00:11:03.865 success 1031, unsuccess 104, failed 0 00:11:03.865 06:35:43 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:03.865 06:35:43 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:03.865 06:35:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:03.865 06:35:43 -- nvmf/common.sh@116 -- # sync 00:11:03.865 06:35:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:03.865 06:35:43 -- nvmf/common.sh@119 -- # set +e 00:11:03.865 06:35:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:03.865 06:35:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:03.865 rmmod nvme_tcp 00:11:03.865 rmmod nvme_fabrics 00:11:03.865 rmmod nvme_keyring 00:11:03.865 06:35:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:03.865 06:35:43 -- nvmf/common.sh@123 -- # set -e 00:11:03.865 06:35:43 -- nvmf/common.sh@124 -- # return 0 00:11:03.865 06:35:43 -- nvmf/common.sh@477 -- # '[' -n 74391 ']' 00:11:03.865 06:35:43 -- nvmf/common.sh@478 -- # killprocess 74391 00:11:03.865 06:35:43 -- common/autotest_common.sh@926 -- # '[' -z 74391 ']' 00:11:03.865 06:35:43 -- common/autotest_common.sh@930 -- # kill -0 74391 00:11:03.865 06:35:43 -- common/autotest_common.sh@931 -- # uname 00:11:03.865 06:35:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:03.865 06:35:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74391 00:11:03.865 06:35:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:03.865 killing process with pid 74391 00:11:03.865 06:35:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:03.865 06:35:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74391' 00:11:03.865 06:35:43 -- common/autotest_common.sh@945 -- # kill 
74391 00:11:03.865 06:35:43 -- common/autotest_common.sh@950 -- # wait 74391 00:11:03.865 06:35:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:03.866 06:35:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:03.866 06:35:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:03.866 06:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.866 06:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.866 06:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.866 06:35:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:03.866 00:11:03.866 real 0m24.347s 00:11:03.866 user 0m40.065s 00:11:03.866 sys 0m6.538s 00:11:03.866 06:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.866 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:11:03.866 ************************************ 00:11:03.866 END TEST nvmf_zcopy 00:11:03.866 ************************************ 00:11:03.866 06:35:43 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:03.866 06:35:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:03.866 06:35:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.866 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:11:03.866 ************************************ 00:11:03.866 START TEST nvmf_nmic 00:11:03.866 ************************************ 00:11:03.866 06:35:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:03.866 * Looking for test storage... 
00:11:03.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.866 06:35:43 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.866 06:35:43 -- nvmf/common.sh@7 -- # uname -s 00:11:03.866 06:35:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.866 06:35:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.866 06:35:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.866 06:35:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.866 06:35:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.866 06:35:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.866 06:35:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.866 06:35:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.866 06:35:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.866 06:35:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:11:03.866 06:35:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:11:03.866 06:35:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.866 06:35:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.866 06:35:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.866 06:35:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.866 06:35:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.866 06:35:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.866 06:35:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.866 06:35:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.866 06:35:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.866 06:35:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.866 06:35:43 -- paths/export.sh@5 
-- # export PATH 00:11:03.866 06:35:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.866 06:35:43 -- nvmf/common.sh@46 -- # : 0 00:11:03.866 06:35:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:03.866 06:35:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:03.866 06:35:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:03.866 06:35:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.866 06:35:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.866 06:35:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:03.866 06:35:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:03.866 06:35:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:03.866 06:35:43 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.866 06:35:43 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.866 06:35:43 -- target/nmic.sh@14 -- # nvmftestinit 00:11:03.866 06:35:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:03.866 06:35:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.866 06:35:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:03.866 06:35:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:03.866 06:35:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:03.866 06:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.866 06:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.866 06:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.866 06:35:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:03.866 06:35:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:03.866 06:35:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.866 06:35:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.866 06:35:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:03.866 06:35:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:03.866 06:35:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.866 06:35:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.866 06:35:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.866 06:35:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.866 06:35:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.866 06:35:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.866 06:35:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.866 06:35:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.866 06:35:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:03.866 06:35:43 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:03.866 Cannot find device "nvmf_tgt_br" 00:11:03.866 06:35:43 -- nvmf/common.sh@154 -- # true 00:11:03.866 06:35:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.866 Cannot find device "nvmf_tgt_br2" 00:11:03.866 06:35:43 -- nvmf/common.sh@155 -- # true 00:11:03.866 06:35:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:03.866 06:35:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:03.866 Cannot find device "nvmf_tgt_br" 00:11:03.866 06:35:43 -- nvmf/common.sh@157 -- # true 00:11:03.866 06:35:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:03.866 Cannot find device "nvmf_tgt_br2" 00:11:03.866 06:35:43 -- nvmf/common.sh@158 -- # true 00:11:03.866 06:35:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:03.866 06:35:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:03.866 06:35:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.866 06:35:43 -- nvmf/common.sh@161 -- # true 00:11:03.866 06:35:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.866 06:35:43 -- nvmf/common.sh@162 -- # true 00:11:03.866 06:35:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.866 06:35:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.124 06:35:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.124 06:35:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.124 06:35:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.124 06:35:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.124 06:35:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.124 06:35:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:04.124 06:35:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:04.124 06:35:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:04.124 06:35:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:04.124 06:35:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:04.124 06:35:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:04.124 06:35:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.124 06:35:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.124 06:35:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.124 06:35:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:04.124 06:35:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:04.124 06:35:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.124 06:35:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.124 06:35:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.124 06:35:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.124 06:35:43 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.124 06:35:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:04.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:04.124 00:11:04.124 --- 10.0.0.2 ping statistics --- 00:11:04.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.124 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:04.124 06:35:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:04.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:04.124 00:11:04.124 --- 10.0.0.3 ping statistics --- 00:11:04.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.124 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:04.124 06:35:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:04.124 00:11:04.124 --- 10.0.0.1 ping statistics --- 00:11:04.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.124 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:04.124 06:35:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.124 06:35:43 -- nvmf/common.sh@421 -- # return 0 00:11:04.124 06:35:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:04.124 06:35:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.124 06:35:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:04.124 06:35:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:04.124 06:35:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.124 06:35:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:04.124 06:35:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:04.124 06:35:43 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:04.124 06:35:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:04.124 06:35:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:04.124 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 06:35:43 -- nvmf/common.sh@469 -- # nvmfpid=74859 00:11:04.124 06:35:43 -- nvmf/common.sh@470 -- # waitforlisten 74859 00:11:04.124 06:35:43 -- common/autotest_common.sh@819 -- # '[' -z 74859 ']' 00:11:04.124 06:35:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.124 06:35:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.124 06:35:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:04.124 06:35:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.124 06:35:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:04.124 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 [2024-07-12 06:35:44.016298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
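The pings above verify the veth/netns plumbing that nvmf_veth_init just built. A much-reduced sketch of that topology follows (namespace and interface names copied from the trace; the real helper also wires both ends through the nvmf_br bridge, adds a second target interface, and opens port 4420 in iptables):

  # hypothetical minimal recreation: one veth pair with the target end in a namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2    # initiator -> target, as checked above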
00:11:04.124 [2024-07-12 06:35:44.016403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.381 [2024-07-12 06:35:44.159892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.381 [2024-07-12 06:35:44.202176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:04.381 [2024-07-12 06:35:44.202325] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.381 [2024-07-12 06:35:44.202341] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.381 [2024-07-12 06:35:44.202350] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.381 [2024-07-12 06:35:44.202483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.381 [2024-07-12 06:35:44.202624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.381 [2024-07-12 06:35:44.203193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.381 [2024-07-12 06:35:44.203204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.316 06:35:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:05.316 06:35:44 -- common/autotest_common.sh@852 -- # return 0 00:11:05.316 06:35:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:05.316 06:35:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:05.316 06:35:44 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 06:35:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.316 06:35:44 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.316 06:35:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 06:35:44 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 [2024-07-12 06:35:45.005130] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.316 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 06:35:45 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:05.316 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 Malloc0 00:11:05.316 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 06:35:45 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.316 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.317 06:35:45 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.317 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.317 06:35:45 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.317 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 
[2024-07-12 06:35:45.072337] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.317 test case1: single bdev can't be used in multiple subsystems 00:11:05.317 06:35:45 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:05.317 06:35:45 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:05.317 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.317 06:35:45 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:05.317 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.317 06:35:45 -- target/nmic.sh@28 -- # nmic_status=0 00:11:05.317 06:35:45 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:05.317 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 [2024-07-12 06:35:45.096165] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:05.317 [2024-07-12 06:35:45.096218] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:05.317 [2024-07-12 06:35:45.096235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.317 request: 00:11:05.317 { 00:11:05.317 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:05.317 "namespace": { 00:11:05.317 "bdev_name": "Malloc0" 00:11:05.317 }, 00:11:05.317 "method": "nvmf_subsystem_add_ns", 00:11:05.317 "req_id": 1 00:11:05.317 } 00:11:05.317 Got JSON-RPC error response 00:11:05.317 response: 00:11:05.317 { 00:11:05.317 "code": -32602, 00:11:05.317 "message": "Invalid parameters" 00:11:05.317 } 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:11:05.317 06:35:45 -- target/nmic.sh@29 -- # nmic_status=1 00:11:05.317 06:35:45 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:05.317 Adding namespace failed - expected result. 00:11:05.317 06:35:45 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
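The failed nvmf_subsystem_add_ns call above is the point of test case 1: once Malloc0 is attached to cnode1, the NVMe-oF target holds an exclusive_write claim on the bdev, so a second subsystem cannot open it and the RPC returns the -32602 "Invalid parameters" error seen in the trace. A minimal standalone sketch of the same check, assuming an nvmf_tgt is already running on the default /var/tmp/spdk.sock and reusing the rpc.py helper and arguments that appear in the traced rpc_cmd calls (that rpc.py exits nonzero on a JSON-RPC error response is an assumption here, not something shown in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                       # same transport args as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # first claim on Malloc0 succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then   # second claim must be rejected
    echo 'unexpected: Malloc0 was added to a second subsystem' >&2
    exit 1
fi
echo 'Adding namespace failed - expected result.'

The rejection path is what the script records as nmic_status=1 above; test case 2 then checks the complementary case, that one subsystem may expose the same namespace through multiple listeners.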
00:11:05.317 test case2: host connect to nvmf target in multiple paths 00:11:05.317 06:35:45 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:05.317 06:35:45 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:05.317 06:35:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.317 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 [2024-07-12 06:35:45.108364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:05.317 06:35:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.317 06:35:45 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.575 06:35:45 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:05.575 06:35:45 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.575 06:35:45 -- common/autotest_common.sh@1177 -- # local i=0 00:11:05.575 06:35:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.575 06:35:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:05.575 06:35:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:07.473 06:35:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:07.473 06:35:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:07.473 06:35:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.473 06:35:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:07.473 06:35:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.731 06:35:47 -- common/autotest_common.sh@1187 -- # return 0 00:11:07.731 06:35:47 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:07.731 [global] 00:11:07.731 thread=1 00:11:07.731 invalidate=1 00:11:07.731 rw=write 00:11:07.731 time_based=1 00:11:07.731 runtime=1 00:11:07.731 ioengine=libaio 00:11:07.731 direct=1 00:11:07.731 bs=4096 00:11:07.731 iodepth=1 00:11:07.731 norandommap=0 00:11:07.731 numjobs=1 00:11:07.731 00:11:07.731 verify_dump=1 00:11:07.731 verify_backlog=512 00:11:07.731 verify_state_save=0 00:11:07.731 do_verify=1 00:11:07.731 verify=crc32c-intel 00:11:07.731 [job0] 00:11:07.731 filename=/dev/nvme0n1 00:11:07.731 Could not set queue depth (nvme0n1) 00:11:07.731 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.731 fio-3.35 00:11:07.731 Starting 1 thread 00:11:09.106 00:11:09.106 job0: (groupid=0, jobs=1): err= 0: pid=74950: Fri Jul 12 06:35:48 2024 00:11:09.106 read: IOPS=3024, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:11:09.106 slat (nsec): min=12709, max=45113, avg=15511.61, stdev=2934.31 00:11:09.106 clat (usec): min=138, max=462, avg=176.43, stdev=20.48 00:11:09.106 lat (usec): min=152, max=483, avg=191.95, stdev=20.95 00:11:09.106 clat percentiles (usec): 00:11:09.106 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:11:09.106 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:11:09.106 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 
95.00th=[ 202], 00:11:09.106 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 424], 99.95th=[ 453], 00:11:09.106 | 99.99th=[ 461] 00:11:09.106 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:09.106 slat (usec): min=14, max=150, avg=22.12, stdev= 5.64 00:11:09.106 clat (usec): min=86, max=495, avg=110.58, stdev=16.65 00:11:09.106 lat (usec): min=105, max=519, avg=132.70, stdev=18.76 00:11:09.106 clat percentiles (usec): 00:11:09.106 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:11:09.106 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 112], 00:11:09.106 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 133], 00:11:09.106 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 322], 99.95th=[ 375], 00:11:09.106 | 99.99th=[ 494] 00:11:09.106 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:09.106 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:09.106 lat (usec) : 100=10.59%, 250=89.15%, 500=0.26% 00:11:09.106 cpu : usr=2.00%, sys=9.40%, ctx=6100, majf=0, minf=2 00:11:09.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.106 issued rwts: total=3028,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.106 00:11:09.106 Run status group 0 (all jobs): 00:11:09.106 READ: bw=11.8MiB/s (12.4MB/s), 11.8MiB/s-11.8MiB/s (12.4MB/s-12.4MB/s), io=11.8MiB (12.4MB), run=1001-1001msec 00:11:09.106 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:09.106 00:11:09.106 Disk stats (read/write): 00:11:09.106 nvme0n1: ios=2610/2971, merge=0/0, ticks=502/362, in_queue=864, util=91.48% 00:11:09.106 06:35:48 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:09.106 06:35:48 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.106 06:35:48 -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.106 06:35:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:09.106 06:35:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.106 06:35:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:09.106 06:35:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.106 06:35:48 -- common/autotest_common.sh@1210 -- # return 0 00:11:09.106 06:35:48 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:09.106 06:35:48 -- target/nmic.sh@53 -- # nvmftestfini 00:11:09.106 06:35:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:09.106 06:35:48 -- nvmf/common.sh@116 -- # sync 00:11:09.106 06:35:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:09.106 06:35:48 -- nvmf/common.sh@119 -- # set +e 00:11:09.106 06:35:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:09.106 06:35:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:09.106 rmmod nvme_tcp 00:11:09.106 rmmod nvme_fabrics 00:11:09.106 rmmod nvme_keyring 00:11:09.106 06:35:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:09.106 06:35:48 -- nvmf/common.sh@123 -- # set -e 00:11:09.106 06:35:48 -- nvmf/common.sh@124 -- # return 0 00:11:09.106 06:35:48 -- nvmf/common.sh@477 -- # '[' -n 
74859 ']' 00:11:09.106 06:35:48 -- nvmf/common.sh@478 -- # killprocess 74859 00:11:09.106 06:35:48 -- common/autotest_common.sh@926 -- # '[' -z 74859 ']' 00:11:09.106 06:35:48 -- common/autotest_common.sh@930 -- # kill -0 74859 00:11:09.106 06:35:48 -- common/autotest_common.sh@931 -- # uname 00:11:09.106 06:35:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:09.106 06:35:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74859 00:11:09.106 killing process with pid 74859 00:11:09.106 06:35:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:09.106 06:35:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:09.106 06:35:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74859' 00:11:09.106 06:35:48 -- common/autotest_common.sh@945 -- # kill 74859 00:11:09.106 06:35:48 -- common/autotest_common.sh@950 -- # wait 74859 00:11:09.366 06:35:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:09.366 06:35:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:09.366 06:35:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:09.366 06:35:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.366 06:35:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:09.366 06:35:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.366 06:35:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.366 06:35:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.366 06:35:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:09.366 ************************************ 00:11:09.366 END TEST nvmf_nmic 00:11:09.366 ************************************ 00:11:09.366 00:11:09.366 real 0m5.571s 00:11:09.366 user 0m18.172s 00:11:09.366 sys 0m2.167s 00:11:09.366 06:35:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.366 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 06:35:49 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:09.366 06:35:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:09.366 06:35:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.366 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 ************************************ 00:11:09.366 START TEST nvmf_fio_target 00:11:09.366 ************************************ 00:11:09.366 06:35:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:09.366 * Looking for test storage... 
00:11:09.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.366 06:35:49 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.366 06:35:49 -- nvmf/common.sh@7 -- # uname -s 00:11:09.366 06:35:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.366 06:35:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.366 06:35:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.366 06:35:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.366 06:35:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.366 06:35:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.366 06:35:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.366 06:35:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.366 06:35:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.366 06:35:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.366 06:35:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:11:09.366 06:35:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:11:09.366 06:35:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.366 06:35:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.366 06:35:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.366 06:35:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.366 06:35:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.366 06:35:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.366 06:35:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.366 06:35:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.366 06:35:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.366 06:35:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.366 06:35:49 -- paths/export.sh@5 
-- # export PATH 00:11:09.367 06:35:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.367 06:35:49 -- nvmf/common.sh@46 -- # : 0 00:11:09.367 06:35:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:09.367 06:35:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:09.367 06:35:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:09.367 06:35:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.367 06:35:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.367 06:35:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:09.367 06:35:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:09.367 06:35:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:09.367 06:35:49 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.367 06:35:49 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.367 06:35:49 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.367 06:35:49 -- target/fio.sh@16 -- # nvmftestinit 00:11:09.367 06:35:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:09.367 06:35:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.367 06:35:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:09.367 06:35:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:09.367 06:35:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:09.367 06:35:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.367 06:35:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.367 06:35:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.367 06:35:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:09.367 06:35:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:09.367 06:35:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:09.367 06:35:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:09.367 06:35:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:09.367 06:35:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:09.367 06:35:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.367 06:35:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.367 06:35:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:09.367 06:35:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:09.367 06:35:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.367 06:35:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.367 06:35:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.367 06:35:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.367 06:35:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.367 06:35:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.367 06:35:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.367 06:35:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.367 06:35:49 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:09.367 06:35:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:09.624 Cannot find device "nvmf_tgt_br" 00:11:09.624 06:35:49 -- nvmf/common.sh@154 -- # true 00:11:09.624 06:35:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.624 Cannot find device "nvmf_tgt_br2" 00:11:09.624 06:35:49 -- nvmf/common.sh@155 -- # true 00:11:09.624 06:35:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:09.624 06:35:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:09.624 Cannot find device "nvmf_tgt_br" 00:11:09.624 06:35:49 -- nvmf/common.sh@157 -- # true 00:11:09.624 06:35:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:09.624 Cannot find device "nvmf_tgt_br2" 00:11:09.624 06:35:49 -- nvmf/common.sh@158 -- # true 00:11:09.624 06:35:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:09.625 06:35:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:09.625 06:35:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.625 06:35:49 -- nvmf/common.sh@161 -- # true 00:11:09.625 06:35:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.625 06:35:49 -- nvmf/common.sh@162 -- # true 00:11:09.625 06:35:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.625 06:35:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.625 06:35:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.625 06:35:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.625 06:35:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.625 06:35:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.625 06:35:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.625 06:35:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.625 06:35:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.625 06:35:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:09.625 06:35:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:09.625 06:35:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:09.625 06:35:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:09.625 06:35:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.625 06:35:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.625 06:35:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.625 06:35:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:09.625 06:35:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:09.625 06:35:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.882 06:35:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.882 06:35:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.882 06:35:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.882 06:35:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.882 06:35:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:09.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:11:09.882 00:11:09.882 --- 10.0.0.2 ping statistics --- 00:11:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.882 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:09.882 06:35:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:09.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:09.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:09.882 00:11:09.882 --- 10.0.0.3 ping statistics --- 00:11:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.882 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:09.882 06:35:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:09.882 00:11:09.882 --- 10.0.0.1 ping statistics --- 00:11:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.882 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:09.882 06:35:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.882 06:35:49 -- nvmf/common.sh@421 -- # return 0 00:11:09.882 06:35:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:09.882 06:35:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.882 06:35:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:09.882 06:35:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:09.882 06:35:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.882 06:35:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:09.882 06:35:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:09.882 06:35:49 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:09.882 06:35:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:09.882 06:35:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:09.882 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:11:09.882 06:35:49 -- nvmf/common.sh@469 -- # nvmfpid=75128 00:11:09.882 06:35:49 -- nvmf/common.sh@470 -- # waitforlisten 75128 00:11:09.882 06:35:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.882 06:35:49 -- common/autotest_common.sh@819 -- # '[' -z 75128 ']' 00:11:09.882 06:35:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.882 06:35:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:09.882 06:35:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.882 06:35:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:09.882 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:11:09.882 [2024-07-12 06:35:49.680875] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:09.882 [2024-07-12 06:35:49.681024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.139 [2024-07-12 06:35:49.826652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.139 [2024-07-12 06:35:49.861887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:10.139 [2024-07-12 06:35:49.862063] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.139 [2024-07-12 06:35:49.862084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.139 [2024-07-12 06:35:49.862094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.139 [2024-07-12 06:35:49.862651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.139 [2024-07-12 06:35:49.862801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.139 [2024-07-12 06:35:49.862886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.139 [2024-07-12 06:35:49.862865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.070 06:35:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:11.070 06:35:50 -- common/autotest_common.sh@852 -- # return 0 00:11:11.070 06:35:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:11.070 06:35:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:11.070 06:35:50 -- common/autotest_common.sh@10 -- # set +x 00:11:11.070 06:35:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.070 06:35:50 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.070 [2024-07-12 06:35:50.942032] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.070 06:35:50 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.365 06:35:51 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:11.365 06:35:51 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.928 06:35:51 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:11.928 06:35:51 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.929 06:35:51 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:11.929 06:35:51 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.186 06:35:52 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:12.186 06:35:52 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:12.443 06:35:52 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.009 06:35:52 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:13.009 06:35:52 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.009 06:35:52 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:13.009 06:35:52 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.271 06:35:53 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:11:13.271 06:35:53 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:13.529 06:35:53 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.786 06:35:53 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:13.786 06:35:53 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.352 06:35:53 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.352 06:35:53 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.352 06:35:54 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.610 [2024-07-12 06:35:54.408380] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.610 06:35:54 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:14.868 06:35:54 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:15.126 06:35:54 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.383 06:35:55 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:15.383 06:35:55 -- common/autotest_common.sh@1177 -- # local i=0 00:11:15.383 06:35:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.383 06:35:55 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:11:15.383 06:35:55 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:11:15.383 06:35:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:17.323 06:35:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:17.323 06:35:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:17.323 06:35:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.323 06:35:57 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:11:17.323 06:35:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.323 06:35:57 -- common/autotest_common.sh@1187 -- # return 0 00:11:17.323 06:35:57 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:17.323 [global] 00:11:17.323 thread=1 00:11:17.323 invalidate=1 00:11:17.323 rw=write 00:11:17.323 time_based=1 00:11:17.323 runtime=1 00:11:17.323 ioengine=libaio 00:11:17.323 direct=1 00:11:17.323 bs=4096 00:11:17.323 iodepth=1 00:11:17.323 norandommap=0 00:11:17.323 numjobs=1 00:11:17.323 00:11:17.323 verify_dump=1 00:11:17.323 verify_backlog=512 00:11:17.323 verify_state_save=0 00:11:17.323 do_verify=1 00:11:17.323 verify=crc32c-intel 00:11:17.323 [job0] 00:11:17.323 filename=/dev/nvme0n1 00:11:17.323 [job1] 00:11:17.323 filename=/dev/nvme0n2 00:11:17.323 [job2] 00:11:17.323 filename=/dev/nvme0n3 00:11:17.323 [job3] 00:11:17.323 filename=/dev/nvme0n4 00:11:17.323 Could not set queue depth (nvme0n1) 00:11:17.323 Could not set queue depth (nvme0n2) 
00:11:17.323 Could not set queue depth (nvme0n3) 00:11:17.323 Could not set queue depth (nvme0n4) 00:11:17.581 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.581 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.581 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.581 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.581 fio-3.35 00:11:17.581 Starting 4 threads 00:11:18.955 00:11:18.955 job0: (groupid=0, jobs=1): err= 0: pid=75319: Fri Jul 12 06:35:58 2024 00:11:18.955 read: IOPS=2756, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:11:18.955 slat (nsec): min=12065, max=42809, avg=15809.10, stdev=2961.99 00:11:18.955 clat (usec): min=134, max=324, avg=169.30, stdev=18.26 00:11:18.955 lat (usec): min=149, max=339, avg=185.10, stdev=18.26 00:11:18.955 clat percentiles (usec): 00:11:18.955 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:11:18.955 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:18.955 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 202], 00:11:18.955 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 297], 00:11:18.955 | 99.99th=[ 326] 00:11:18.955 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.955 slat (nsec): min=13955, max=92825, avg=22070.11, stdev=4966.61 00:11:18.955 clat (usec): min=94, max=1693, avg=133.75, stdev=36.43 00:11:18.955 lat (usec): min=113, max=1715, avg=155.82, stdev=36.47 00:11:18.955 clat percentiles (usec): 00:11:18.955 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 119], 00:11:18.955 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 133], 00:11:18.955 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 159], 95.00th=[ 174], 00:11:18.955 | 99.00th=[ 198], 99.50th=[ 215], 99.90th=[ 310], 99.95th=[ 783], 00:11:18.955 | 99.99th=[ 1696] 00:11:18.955 bw ( KiB/s): min=12288, max=12288, per=25.65%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.955 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.955 lat (usec) : 100=0.24%, 250=99.37%, 500=0.36%, 1000=0.02% 00:11:18.955 lat (msec) : 2=0.02% 00:11:18.955 cpu : usr=2.00%, sys=9.00%, ctx=5831, majf=0, minf=9 00:11:18.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.955 issued rwts: total=2759,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.955 job1: (groupid=0, jobs=1): err= 0: pid=75320: Fri Jul 12 06:35:58 2024 00:11:18.955 read: IOPS=2690, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:11:18.955 slat (nsec): min=12687, max=38766, avg=16080.44, stdev=2728.36 00:11:18.955 clat (usec): min=132, max=388, avg=171.09, stdev=18.06 00:11:18.955 lat (usec): min=147, max=415, avg=187.17, stdev=18.39 00:11:18.955 clat percentiles (usec): 00:11:18.955 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:11:18.955 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:11:18.955 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:11:18.955 | 99.00th=[ 223], 99.50th=[ 241], 99.90th=[ 375], 99.95th=[ 379], 00:11:18.955 | 
99.99th=[ 388] 00:11:18.955 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.955 slat (usec): min=14, max=138, avg=22.51, stdev= 5.07 00:11:18.955 clat (usec): min=95, max=1699, avg=135.44, stdev=34.12 00:11:18.955 lat (usec): min=114, max=1717, avg=157.95, stdev=34.96 00:11:18.955 clat percentiles (usec): 00:11:18.955 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 121], 00:11:18.955 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:11:18.955 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 163], 00:11:18.955 | 99.00th=[ 182], 99.50th=[ 196], 99.90th=[ 359], 99.95th=[ 383], 00:11:18.955 | 99.99th=[ 1696] 00:11:18.955 bw ( KiB/s): min=12288, max=12288, per=25.65%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.955 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.956 lat (usec) : 100=0.17%, 250=99.46%, 500=0.35% 00:11:18.956 lat (msec) : 2=0.02% 00:11:18.956 cpu : usr=1.80%, sys=9.20%, ctx=5766, majf=0, minf=8 00:11:18.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.956 issued rwts: total=2693,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.956 job2: (groupid=0, jobs=1): err= 0: pid=75321: Fri Jul 12 06:35:58 2024 00:11:18.956 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:18.956 slat (nsec): min=11847, max=46215, avg=16333.04, stdev=3713.93 00:11:18.956 clat (usec): min=141, max=239, avg=177.63, stdev=13.95 00:11:18.956 lat (usec): min=155, max=253, avg=193.96, stdev=14.29 00:11:18.956 clat percentiles (usec): 00:11:18.956 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:11:18.956 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:11:18.956 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:11:18.956 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 229], 99.95th=[ 231], 00:11:18.956 | 99.99th=[ 239] 00:11:18.956 write: IOPS=3027, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:11:18.956 slat (nsec): min=15243, max=94423, avg=22674.62, stdev=6031.07 00:11:18.956 clat (usec): min=99, max=651, avg=139.86, stdev=22.37 00:11:18.956 lat (usec): min=118, max=684, avg=162.53, stdev=23.18 00:11:18.956 clat percentiles (usec): 00:11:18.956 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 128], 00:11:18.956 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:11:18.956 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:11:18.956 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 502], 99.95th=[ 594], 00:11:18.956 | 99.99th=[ 652] 00:11:18.956 bw ( KiB/s): min=12312, max=12312, per=25.70%, avg=12312.00, stdev= 0.00, samples=1 00:11:18.956 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:11:18.956 lat (usec) : 100=0.04%, 250=99.84%, 500=0.05%, 750=0.07% 00:11:18.956 cpu : usr=3.10%, sys=7.90%, ctx=5595, majf=0, minf=9 00:11:18.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.956 issued rwts: total=2560,3031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.956 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:11:18.956 job3: (groupid=0, jobs=1): err= 0: pid=75322: Fri Jul 12 06:35:58 2024 00:11:18.956 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:18.956 slat (nsec): min=12690, max=48989, avg=17943.17, stdev=4405.48 00:11:18.956 clat (usec): min=145, max=270, avg=178.76, stdev=13.48 00:11:18.956 lat (usec): min=159, max=294, avg=196.70, stdev=13.88 00:11:18.956 clat percentiles (usec): 00:11:18.956 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:11:18.956 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:11:18.956 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 202], 00:11:18.956 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 243], 99.95th=[ 262], 00:11:18.956 | 99.99th=[ 269] 00:11:18.956 write: IOPS=2809, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:11:18.956 slat (usec): min=16, max=107, avg=26.52, stdev= 7.35 00:11:18.956 clat (usec): min=99, max=7453, avg=145.92, stdev=170.05 00:11:18.956 lat (usec): min=120, max=7475, avg=172.44, stdev=170.21 00:11:18.956 clat percentiles (usec): 00:11:18.956 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 128], 00:11:18.956 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:11:18.956 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:11:18.956 | 99.00th=[ 178], 99.50th=[ 198], 99.90th=[ 2507], 99.95th=[ 3490], 00:11:18.956 | 99.99th=[ 7439] 00:11:18.956 bw ( KiB/s): min=12288, max=12288, per=25.65%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.956 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.956 lat (usec) : 100=0.02%, 250=99.74%, 500=0.06%, 750=0.02%, 1000=0.04% 00:11:18.956 lat (msec) : 2=0.06%, 4=0.06%, 10=0.02% 00:11:18.956 cpu : usr=2.60%, sys=9.30%, ctx=5372, majf=0, minf=9 00:11:18.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.956 issued rwts: total=2560,2812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.956 00:11:18.956 Run status group 0 (all jobs): 00:11:18.956 READ: bw=41.3MiB/s (43.3MB/s), 9.99MiB/s-10.8MiB/s (10.5MB/s-11.3MB/s), io=41.3MiB (43.3MB), run=1001-1001msec 00:11:18.956 WRITE: bw=46.8MiB/s (49.0MB/s), 11.0MiB/s-12.0MiB/s (11.5MB/s-12.6MB/s), io=46.8MiB (49.1MB), run=1001-1001msec 00:11:18.956 00:11:18.956 Disk stats (read/write): 00:11:18.956 nvme0n1: ios=2490/2560, merge=0/0, ticks=440/361, in_queue=801, util=87.79% 00:11:18.956 nvme0n2: ios=2420/2560, merge=0/0, ticks=442/374, in_queue=816, util=88.78% 00:11:18.956 nvme0n3: ios=2266/2560, merge=0/0, ticks=409/368, in_queue=777, util=89.27% 00:11:18.956 nvme0n4: ios=2085/2560, merge=0/0, ticks=378/381, in_queue=759, util=88.89% 00:11:18.956 06:35:58 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:18.956 [global] 00:11:18.956 thread=1 00:11:18.956 invalidate=1 00:11:18.956 rw=randwrite 00:11:18.956 time_based=1 00:11:18.956 runtime=1 00:11:18.956 ioengine=libaio 00:11:18.956 direct=1 00:11:18.956 bs=4096 00:11:18.956 iodepth=1 00:11:18.956 norandommap=0 00:11:18.956 numjobs=1 00:11:18.956 00:11:18.956 verify_dump=1 00:11:18.956 verify_backlog=512 00:11:18.956 verify_state_save=0 00:11:18.956 do_verify=1 00:11:18.956 verify=crc32c-intel 00:11:18.956 
[job0] 00:11:18.956 filename=/dev/nvme0n1 00:11:18.956 [job1] 00:11:18.956 filename=/dev/nvme0n2 00:11:18.956 [job2] 00:11:18.956 filename=/dev/nvme0n3 00:11:18.956 [job3] 00:11:18.956 filename=/dev/nvme0n4 00:11:18.956 Could not set queue depth (nvme0n1) 00:11:18.956 Could not set queue depth (nvme0n2) 00:11:18.956 Could not set queue depth (nvme0n3) 00:11:18.956 Could not set queue depth (nvme0n4) 00:11:18.956 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.956 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.956 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.956 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.956 fio-3.35 00:11:18.956 Starting 4 threads 00:11:19.891 00:11:19.891 job0: (groupid=0, jobs=1): err= 0: pid=75375: Fri Jul 12 06:35:59 2024 00:11:19.891 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:19.891 slat (nsec): min=14054, max=90059, avg=23274.82, stdev=8032.99 00:11:19.891 clat (usec): min=141, max=1402, avg=289.03, stdev=70.11 00:11:19.891 lat (usec): min=157, max=1419, avg=312.31, stdev=74.23 00:11:19.891 clat percentiles (usec): 00:11:19.891 | 1.00th=[ 169], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 249], 00:11:19.891 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:11:19.891 | 70.00th=[ 285], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 416], 00:11:19.891 | 99.00th=[ 486], 99.50th=[ 545], 99.90th=[ 881], 99.95th=[ 1401], 00:11:19.891 | 99.99th=[ 1401] 00:11:19.891 write: IOPS=1981, BW=7924KiB/s (8114kB/s)(7932KiB/1001msec); 0 zone resets 00:11:19.891 slat (usec): min=16, max=2620, avg=35.32, stdev=59.81 00:11:19.891 clat (usec): min=4, max=2231, avg=222.06, stdev=75.61 00:11:19.891 lat (usec): min=123, max=2624, avg=257.38, stdev=97.74 00:11:19.891 clat percentiles (usec): 00:11:19.891 | 1.00th=[ 111], 5.00th=[ 127], 10.00th=[ 155], 20.00th=[ 186], 00:11:19.891 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:11:19.891 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 314], 95.00th=[ 334], 00:11:19.891 | 99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 1020], 99.95th=[ 2245], 00:11:19.891 | 99.99th=[ 2245] 00:11:19.891 bw ( KiB/s): min= 8192, max= 8192, per=20.46%, avg=8192.00, stdev= 0.00, samples=1 00:11:19.891 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:19.891 lat (usec) : 10=0.03%, 100=0.06%, 250=55.24%, 500=44.25%, 750=0.31% 00:11:19.891 lat (usec) : 1000=0.03% 00:11:19.891 lat (msec) : 2=0.06%, 4=0.03% 00:11:19.891 cpu : usr=2.60%, sys=7.70%, ctx=3519, majf=0, minf=13 00:11:19.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.891 issued rwts: total=1536,1983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.891 job1: (groupid=0, jobs=1): err= 0: pid=75376: Fri Jul 12 06:35:59 2024 00:11:19.891 read: IOPS=2698, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:11:19.891 slat (usec): min=12, max=105, avg=18.58, stdev= 5.67 00:11:19.891 clat (usec): min=115, max=654, avg=171.64, stdev=20.56 00:11:19.891 lat (usec): min=149, max=671, avg=190.22, stdev=21.47 
00:11:19.891 clat percentiles (usec): 00:11:19.891 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:19.891 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:11:19.891 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:11:19.891 | 99.00th=[ 217], 99.50th=[ 235], 99.90th=[ 404], 99.95th=[ 586], 00:11:19.891 | 99.99th=[ 652] 00:11:19.891 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:19.891 slat (usec): min=15, max=126, avg=25.81, stdev= 7.98 00:11:19.891 clat (usec): min=91, max=446, avg=128.36, stdev=18.21 00:11:19.891 lat (usec): min=111, max=471, avg=154.17, stdev=20.53 00:11:19.891 clat percentiles (usec): 00:11:19.891 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 118], 00:11:19.891 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 131], 00:11:19.891 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:11:19.891 | 99.00th=[ 172], 99.50th=[ 192], 99.90th=[ 392], 99.95th=[ 420], 00:11:19.891 | 99.99th=[ 449] 00:11:19.892 bw ( KiB/s): min=12288, max=12288, per=30.69%, avg=12288.00, stdev= 0.00, samples=1 00:11:19.892 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:19.892 lat (usec) : 100=0.42%, 250=99.26%, 500=0.29%, 750=0.03% 00:11:19.892 cpu : usr=2.70%, sys=10.10%, ctx=5782, majf=0, minf=5 00:11:19.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.892 issued rwts: total=2701,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.892 job2: (groupid=0, jobs=1): err= 0: pid=75377: Fri Jul 12 06:35:59 2024 00:11:19.892 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:19.892 slat (nsec): min=11278, max=37671, avg=14490.57, stdev=2519.02 00:11:19.892 clat (usec): min=138, max=953, avg=183.22, stdev=23.09 00:11:19.892 lat (usec): min=155, max=966, avg=197.71, stdev=23.29 00:11:19.892 clat percentiles (usec): 00:11:19.892 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:11:19.892 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:19.892 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:11:19.892 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 297], 99.95th=[ 297], 00:11:19.892 | 99.99th=[ 955] 00:11:19.892 write: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:11:19.892 slat (usec): min=13, max=150, avg=21.54, stdev= 6.04 00:11:19.892 clat (usec): min=81, max=2193, avg=144.23, stdev=49.91 00:11:19.892 lat (usec): min=125, max=2228, avg=165.77, stdev=50.52 00:11:19.892 clat percentiles (usec): 00:11:19.892 | 1.00th=[ 112], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 128], 00:11:19.892 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:11:19.892 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 178], 00:11:19.892 | 99.00th=[ 206], 99.50th=[ 225], 99.90th=[ 562], 99.95th=[ 1385], 00:11:19.892 | 99.99th=[ 2180] 00:11:19.892 bw ( KiB/s): min=12288, max=12288, per=30.69%, avg=12288.00, stdev= 0.00, samples=1 00:11:19.892 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:19.892 lat (usec) : 100=0.02%, 250=99.73%, 500=0.18%, 750=0.02%, 1000=0.02% 00:11:19.892 lat (msec) : 2=0.02%, 4=0.02% 00:11:19.892 cpu : usr=2.80%, sys=7.40%, ctx=5479, majf=0, 
minf=12 00:11:19.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.892 issued rwts: total=2560,2918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.892 job3: (groupid=0, jobs=1): err= 0: pid=75378: Fri Jul 12 06:35:59 2024 00:11:19.892 read: IOPS=1797, BW=7189KiB/s (7361kB/s)(7196KiB/1001msec) 00:11:19.892 slat (usec): min=13, max=386, avg=23.01, stdev=11.29 00:11:19.892 clat (usec): min=166, max=2812, avg=282.10, stdev=87.42 00:11:19.892 lat (usec): min=185, max=2842, avg=305.11, stdev=89.57 00:11:19.892 clat percentiles (usec): 00:11:19.892 | 1.00th=[ 206], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:11:19.892 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:11:19.892 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 388], 95.00th=[ 449], 00:11:19.892 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 799], 99.95th=[ 2802], 00:11:19.892 | 99.99th=[ 2802] 00:11:19.892 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:19.892 slat (usec): min=18, max=122, avg=30.88, stdev=10.06 00:11:19.892 clat (usec): min=104, max=746, avg=184.21, stdev=37.79 00:11:19.892 lat (usec): min=127, max=771, avg=215.10, stdev=40.77 00:11:19.892 clat percentiles (usec): 00:11:19.892 | 1.00th=[ 116], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 151], 00:11:19.892 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:11:19.892 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 239], 00:11:19.892 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 396], 99.95th=[ 482], 00:11:19.892 | 99.99th=[ 750] 00:11:19.892 bw ( KiB/s): min= 8192, max= 8192, per=20.46%, avg=8192.00, stdev= 0.00, samples=1 00:11:19.892 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:19.892 lat (usec) : 250=65.95%, 500=33.51%, 750=0.49%, 1000=0.03% 00:11:19.892 lat (msec) : 4=0.03% 00:11:19.892 cpu : usr=2.40%, sys=8.00%, ctx=3868, majf=0, minf=15 00:11:19.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.892 issued rwts: total=1799,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.892 00:11:19.892 Run status group 0 (all jobs): 00:11:19.892 READ: bw=33.5MiB/s (35.2MB/s), 6138KiB/s-10.5MiB/s (6285kB/s-11.1MB/s), io=33.6MiB (35.2MB), run=1001-1001msec 00:11:19.892 WRITE: bw=39.1MiB/s (41.0MB/s), 7924KiB/s-12.0MiB/s (8114kB/s-12.6MB/s), io=39.1MiB (41.0MB), run=1001-1001msec 00:11:19.892 00:11:19.892 Disk stats (read/write): 00:11:19.892 nvme0n1: ios=1586/1557, merge=0/0, ticks=491/344, in_queue=835, util=88.58% 00:11:19.892 nvme0n2: ios=2454/2560, merge=0/0, ticks=450/353, in_queue=803, util=89.29% 00:11:19.892 nvme0n3: ios=2155/2560, merge=0/0, ticks=408/400, in_queue=808, util=89.40% 00:11:19.892 nvme0n4: ios=1536/1726, merge=0/0, ticks=446/346, in_queue=792, util=89.87% 00:11:20.150 06:35:59 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:20.150 [global] 00:11:20.150 thread=1 00:11:20.150 invalidate=1 00:11:20.150 rw=write 00:11:20.150 time_based=1 
00:11:20.150 runtime=1 00:11:20.150 ioengine=libaio 00:11:20.150 direct=1 00:11:20.150 bs=4096 00:11:20.150 iodepth=128 00:11:20.150 norandommap=0 00:11:20.150 numjobs=1 00:11:20.150 00:11:20.150 verify_dump=1 00:11:20.150 verify_backlog=512 00:11:20.150 verify_state_save=0 00:11:20.150 do_verify=1 00:11:20.150 verify=crc32c-intel 00:11:20.150 [job0] 00:11:20.150 filename=/dev/nvme0n1 00:11:20.150 [job1] 00:11:20.150 filename=/dev/nvme0n2 00:11:20.150 [job2] 00:11:20.151 filename=/dev/nvme0n3 00:11:20.151 [job3] 00:11:20.151 filename=/dev/nvme0n4 00:11:20.151 Could not set queue depth (nvme0n1) 00:11:20.151 Could not set queue depth (nvme0n2) 00:11:20.151 Could not set queue depth (nvme0n3) 00:11:20.151 Could not set queue depth (nvme0n4) 00:11:20.151 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.151 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.151 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.151 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.151 fio-3.35 00:11:20.151 Starting 4 threads 00:11:21.524 00:11:21.524 job0: (groupid=0, jobs=1): err= 0: pid=75432: Fri Jul 12 06:36:01 2024 00:11:21.524 read: IOPS=3964, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1004msec) 00:11:21.524 slat (usec): min=4, max=6558, avg=135.00, stdev=531.34 00:11:21.524 clat (usec): min=1977, max=28364, avg=17204.72, stdev=5851.29 00:11:21.524 lat (usec): min=4000, max=29163, avg=17339.72, stdev=5881.31 00:11:21.524 clat percentiles (usec): 00:11:21.524 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[10552], 20.00th=[10814], 00:11:21.524 | 30.00th=[10945], 40.00th=[12649], 50.00th=[18744], 60.00th=[21103], 00:11:21.524 | 70.00th=[22152], 80.00th=[23200], 90.00th=[23987], 95.00th=[25035], 00:11:21.524 | 99.00th=[26346], 99.50th=[27395], 99.90th=[27657], 99.95th=[28443], 00:11:21.524 | 99.99th=[28443] 00:11:21.524 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:11:21.524 slat (usec): min=6, max=5108, avg=105.55, stdev=389.13 00:11:21.524 clat (usec): min=8090, max=25762, avg=14251.13, stdev=3716.61 00:11:21.524 lat (usec): min=8846, max=25783, avg=14356.68, stdev=3728.23 00:11:21.524 clat percentiles (usec): 00:11:21.524 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:11:21.524 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[16188], 00:11:21.524 | 70.00th=[17171], 80.00th=[18220], 90.00th=[19006], 95.00th=[20055], 00:11:21.524 | 99.00th=[22152], 99.50th=[23200], 99.90th=[24773], 99.95th=[24773], 00:11:21.524 | 99.99th=[25822] 00:11:21.524 bw ( KiB/s): min=12288, max=20480, per=28.76%, avg=16384.00, stdev=5792.62, samples=2 00:11:21.524 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:11:21.524 lat (msec) : 2=0.01%, 10=3.57%, 20=71.73%, 50=24.69% 00:11:21.524 cpu : usr=3.99%, sys=10.77%, ctx=977, majf=0, minf=6 00:11:21.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:21.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.524 issued rwts: total=3980,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.524 job1: (groupid=0, jobs=1): err= 0: pid=75433: Fri Jul 
12 06:36:01 2024 00:11:21.524 read: IOPS=2769, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1005msec) 00:11:21.524 slat (usec): min=4, max=6665, avg=163.99, stdev=846.00 00:11:21.524 clat (usec): min=3874, max=28158, avg=21056.06, stdev=3723.00 00:11:21.524 lat (usec): min=3895, max=28166, avg=21220.05, stdev=3653.56 00:11:21.524 clat percentiles (usec): 00:11:21.524 | 1.00th=[ 4293], 5.00th=[15795], 10.00th=[19006], 20.00th=[19530], 00:11:21.524 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:11:21.524 | 70.00th=[21890], 80.00th=[23987], 90.00th=[26870], 95.00th=[27132], 00:11:21.524 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:11:21.524 | 99.99th=[28181] 00:11:21.524 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:11:21.524 slat (usec): min=9, max=6632, avg=169.48, stdev=836.32 00:11:21.524 clat (usec): min=14718, max=29025, avg=21976.55, stdev=3684.62 00:11:21.524 lat (usec): min=18493, max=29052, avg=22146.03, stdev=3616.21 00:11:21.524 clat percentiles (usec): 00:11:21.524 | 1.00th=[15533], 5.00th=[19006], 10.00th=[19268], 20.00th=[19530], 00:11:21.524 | 30.00th=[19530], 40.00th=[19792], 50.00th=[19792], 60.00th=[20055], 00:11:21.524 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[27657], 00:11:21.524 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:11:21.524 | 99.99th=[28967] 00:11:21.524 bw ( KiB/s): min=11551, max=13048, per=21.59%, avg=12299.50, stdev=1058.54, samples=2 00:11:21.525 iops : min= 2887, max= 3262, avg=3074.50, stdev=265.17, samples=2 00:11:21.525 lat (msec) : 4=0.14%, 10=0.94%, 20=54.67%, 50=44.25% 00:11:21.525 cpu : usr=2.89%, sys=8.76%, ctx=184, majf=0, minf=9 00:11:21.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:21.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.525 issued rwts: total=2783,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.525 job2: (groupid=0, jobs=1): err= 0: pid=75435: Fri Jul 12 06:36:01 2024 00:11:21.525 read: IOPS=2776, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1003msec) 00:11:21.525 slat (usec): min=6, max=6844, avg=164.64, stdev=846.13 00:11:21.525 clat (usec): min=184, max=28688, avg=21102.51, stdev=3783.12 00:11:21.525 lat (usec): min=3810, max=28697, avg=21267.14, stdev=3706.53 00:11:21.525 clat percentiles (usec): 00:11:21.525 | 1.00th=[ 4228], 5.00th=[15926], 10.00th=[19006], 20.00th=[19268], 00:11:21.525 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20579], 00:11:21.525 | 70.00th=[21890], 80.00th=[23987], 90.00th=[26608], 95.00th=[27395], 00:11:21.525 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:11:21.525 | 99.99th=[28705] 00:11:21.525 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:21.525 slat (usec): min=9, max=6849, avg=170.04, stdev=843.13 00:11:21.525 clat (usec): min=14210, max=29204, avg=21898.69, stdev=3703.07 00:11:21.525 lat (usec): min=17986, max=29228, avg=22068.74, stdev=3634.36 00:11:21.525 clat percentiles (usec): 00:11:21.525 | 1.00th=[15401], 5.00th=[18482], 10.00th=[19006], 20.00th=[19268], 00:11:21.525 | 30.00th=[19530], 40.00th=[19792], 50.00th=[19792], 60.00th=[20055], 00:11:21.525 | 70.00th=[25822], 80.00th=[26870], 90.00th=[27657], 95.00th=[27657], 00:11:21.525 | 99.00th=[28967], 99.50th=[29230], 
99.90th=[29230], 99.95th=[29230], 00:11:21.525 | 99.99th=[29230] 00:11:21.525 bw ( KiB/s): min=11784, max=12792, per=21.57%, avg=12288.00, stdev=712.76, samples=2 00:11:21.525 iops : min= 2946, max= 3198, avg=3072.00, stdev=178.19, samples=2 00:11:21.525 lat (usec) : 250=0.02% 00:11:21.525 lat (msec) : 4=0.20%, 10=0.89%, 20=49.39%, 50=49.50% 00:11:21.525 cpu : usr=1.90%, sys=8.18%, ctx=185, majf=0, minf=15 00:11:21.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:21.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.525 issued rwts: total=2785,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.525 job3: (groupid=0, jobs=1): err= 0: pid=75441: Fri Jul 12 06:36:01 2024 00:11:21.525 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:21.525 slat (usec): min=5, max=5836, avg=135.64, stdev=537.97 00:11:21.525 clat (usec): min=9363, max=26481, avg=17560.28, stdev=4489.20 00:11:21.525 lat (usec): min=10511, max=26509, avg=17695.92, stdev=4502.88 00:11:21.525 clat percentiles (usec): 00:11:21.525 | 1.00th=[10552], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:11:21.525 | 30.00th=[12911], 40.00th=[14484], 50.00th=[18744], 60.00th=[20317], 00:11:21.525 | 70.00th=[21103], 80.00th=[22152], 90.00th=[22938], 95.00th=[23725], 00:11:21.525 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26346], 99.95th=[26608], 00:11:21.525 | 99.99th=[26608] 00:11:21.525 write: IOPS=4058, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1003msec); 0 zone resets 00:11:21.525 slat (usec): min=8, max=8553, avg=119.40, stdev=440.63 00:11:21.525 clat (usec): min=1865, max=23842, avg=15644.30, stdev=3074.01 00:11:21.525 lat (usec): min=3531, max=23859, avg=15763.71, stdev=3076.24 00:11:21.525 clat percentiles (usec): 00:11:21.525 | 1.00th=[ 7373], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:11:21.525 | 30.00th=[13042], 40.00th=[14353], 50.00th=[15926], 60.00th=[17171], 00:11:21.525 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19006], 95.00th=[20055], 00:11:21.525 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23725], 99.95th=[23725], 00:11:21.525 | 99.99th=[23725] 00:11:21.525 bw ( KiB/s): min=13952, max=17627, per=27.72%, avg=15789.50, stdev=2598.62, samples=2 00:11:21.525 iops : min= 3488, max= 4406, avg=3947.00, stdev=649.12, samples=2 00:11:21.525 lat (msec) : 2=0.01%, 4=0.10%, 10=0.99%, 20=76.81%, 50=22.08% 00:11:21.525 cpu : usr=4.29%, sys=9.28%, ctx=1054, majf=0, minf=7 00:11:21.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:21.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.525 issued rwts: total=3584,4071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.525 00:11:21.525 Run status group 0 (all jobs): 00:11:21.525 READ: bw=51.0MiB/s (53.5MB/s), 10.8MiB/s-15.5MiB/s (11.3MB/s-16.2MB/s), io=51.3MiB (53.8MB), run=1003-1005msec 00:11:21.525 WRITE: bw=55.6MiB/s (58.3MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=55.9MiB (58.6MB), run=1003-1005msec 00:11:21.525 00:11:21.525 Disk stats (read/write): 00:11:21.525 nvme0n1: ios=3552/3584, merge=0/0, ticks=13851/10681, in_queue=24532, util=87.37% 00:11:21.525 nvme0n2: ios=2344/2560, merge=0/0, ticks=11687/13358, in_queue=25045, 
util=87.93% 00:11:21.525 nvme0n3: ios=2304/2560, merge=0/0, ticks=10400/11238, in_queue=21638, util=88.55% 00:11:21.525 nvme0n4: ios=3104/3584, merge=0/0, ticks=12365/12219, in_queue=24584, util=89.10% 00:11:21.525 06:36:01 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:21.525 [global] 00:11:21.525 thread=1 00:11:21.525 invalidate=1 00:11:21.525 rw=randwrite 00:11:21.525 time_based=1 00:11:21.525 runtime=1 00:11:21.525 ioengine=libaio 00:11:21.525 direct=1 00:11:21.525 bs=4096 00:11:21.525 iodepth=128 00:11:21.525 norandommap=0 00:11:21.525 numjobs=1 00:11:21.525 00:11:21.525 verify_dump=1 00:11:21.525 verify_backlog=512 00:11:21.525 verify_state_save=0 00:11:21.525 do_verify=1 00:11:21.525 verify=crc32c-intel 00:11:21.525 [job0] 00:11:21.525 filename=/dev/nvme0n1 00:11:21.525 [job1] 00:11:21.525 filename=/dev/nvme0n2 00:11:21.525 [job2] 00:11:21.525 filename=/dev/nvme0n3 00:11:21.525 [job3] 00:11:21.525 filename=/dev/nvme0n4 00:11:21.525 Could not set queue depth (nvme0n1) 00:11:21.525 Could not set queue depth (nvme0n2) 00:11:21.525 Could not set queue depth (nvme0n3) 00:11:21.525 Could not set queue depth (nvme0n4) 00:11:21.525 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.525 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.525 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.525 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.525 fio-3.35 00:11:21.525 Starting 4 threads 00:11:22.901 00:11:22.901 job0: (groupid=0, jobs=1): err= 0: pid=75494: Fri Jul 12 06:36:02 2024 00:11:22.901 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:11:22.901 slat (usec): min=6, max=20136, avg=194.10, stdev=1525.68 00:11:22.901 clat (usec): min=18326, max=47012, avg=25507.26, stdev=3110.60 00:11:22.901 lat (usec): min=18343, max=48583, avg=25701.36, stdev=3384.26 00:11:22.901 clat percentiles (usec): 00:11:22.901 | 1.00th=[19268], 5.00th=[22152], 10.00th=[22938], 20.00th=[23462], 00:11:22.901 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[25035], 00:11:22.901 | 70.00th=[27132], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:11:22.901 | 99.00th=[34341], 99.50th=[38011], 99.90th=[43779], 99.95th=[44827], 00:11:22.902 | 99.99th=[46924] 00:11:22.902 write: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1007msec); 0 zone resets 00:11:22.902 slat (usec): min=5, max=21150, avg=178.96, stdev=1261.11 00:11:22.902 clat (usec): min=6067, max=37093, avg=23101.44, stdev=4407.95 00:11:22.902 lat (usec): min=6093, max=37112, avg=23280.40, stdev=4269.26 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[11076], 5.00th=[14091], 10.00th=[18220], 20.00th=[21365], 00:11:22.902 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:11:22.902 | 70.00th=[23987], 80.00th=[24249], 90.00th=[27132], 95.00th=[28443], 00:11:22.902 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:11:22.902 | 99.99th=[36963] 00:11:22.902 bw ( KiB/s): min= 8848, max=11768, per=15.62%, avg=10308.00, stdev=2064.75, samples=2 00:11:22.902 iops : min= 2212, max= 2942, avg=2577.00, stdev=516.19, samples=2 00:11:22.902 lat (msec) : 10=0.19%, 20=9.23%, 50=90.58% 00:11:22.902 cpu : usr=2.39%, sys=6.76%, ctx=131, majf=0, minf=5 
00:11:22.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:22.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.902 issued rwts: total=2560,2696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.902 job1: (groupid=0, jobs=1): err= 0: pid=75495: Fri Jul 12 06:36:02 2024 00:11:22.902 read: IOPS=5741, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec) 00:11:22.902 slat (usec): min=6, max=5148, avg=77.88, stdev=466.67 00:11:22.902 clat (usec): min=669, max=17792, avg=10880.85, stdev=1379.87 00:11:22.902 lat (usec): min=1903, max=21418, avg=10958.73, stdev=1383.55 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[ 6652], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10290], 00:11:22.902 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:11:22.902 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:11:22.902 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:11:22.902 | 99.99th=[17695] 00:11:22.902 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:11:22.902 slat (usec): min=8, max=6823, avg=82.22, stdev=462.87 00:11:22.902 clat (usec): min=5646, max=14889, avg=10469.56, stdev=1023.90 00:11:22.902 lat (usec): min=6881, max=14941, avg=10551.77, stdev=944.60 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:11:22.902 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:11:22.902 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11863], 00:11:22.902 | 99.00th=[13829], 99.50th=[13829], 99.90th=[14484], 99.95th=[14484], 00:11:22.902 | 99.99th=[14877] 00:11:22.902 bw ( KiB/s): min=24576, max=24576, per=37.24%, avg=24576.00, stdev= 0.00, samples=2 00:11:22.902 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:11:22.902 lat (usec) : 750=0.01% 00:11:22.902 lat (msec) : 2=0.02%, 10=16.63%, 20=83.34% 00:11:22.902 cpu : usr=4.99%, sys=16.57%, ctx=255, majf=0, minf=10 00:11:22.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:22.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.902 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.902 job2: (groupid=0, jobs=1): err= 0: pid=75496: Fri Jul 12 06:36:02 2024 00:11:22.902 read: IOPS=5022, BW=19.6MiB/s (20.6MB/s)(19.8MiB/1009msec) 00:11:22.902 slat (usec): min=6, max=10706, avg=94.11, stdev=574.14 00:11:22.902 clat (usec): min=3843, max=23439, avg=12919.24, stdev=2058.23 00:11:22.902 lat (usec): min=5962, max=25348, avg=13013.35, stdev=2072.12 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[11600], 20.00th=[11994], 00:11:22.902 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:11:22.902 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14222], 95.00th=[16319], 00:11:22.902 | 99.00th=[21103], 99.50th=[22152], 99.90th=[23200], 99.95th=[23462], 00:11:22.902 | 99.99th=[23462] 00:11:22.902 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:11:22.902 slat (usec): min=4, max=9958, avg=94.58, stdev=522.17 
00:11:22.902 clat (usec): min=3300, max=23362, avg=12172.52, stdev=1580.66 00:11:22.902 lat (usec): min=3341, max=23373, avg=12267.10, stdev=1517.35 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[ 5145], 5.00th=[10421], 10.00th=[10945], 20.00th=[11600], 00:11:22.902 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:11:22.902 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13829], 00:11:22.902 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[18220], 00:11:22.902 | 99.99th=[23462] 00:11:22.902 bw ( KiB/s): min=20480, max=20480, per=31.03%, avg=20480.00, stdev= 0.00, samples=2 00:11:22.902 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:22.902 lat (msec) : 4=0.14%, 10=5.30%, 20=93.38%, 50=1.18% 00:11:22.902 cpu : usr=4.37%, sys=15.38%, ctx=289, majf=0, minf=7 00:11:22.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:22.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.902 issued rwts: total=5068,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.902 job3: (groupid=0, jobs=1): err= 0: pid=75497: Fri Jul 12 06:36:02 2024 00:11:22.902 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:11:22.902 slat (usec): min=6, max=13784, avg=173.02, stdev=1127.96 00:11:22.902 clat (usec): min=13227, max=44307, avg=24681.79, stdev=3539.14 00:11:22.902 lat (usec): min=13237, max=49981, avg=24854.82, stdev=3489.76 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[14222], 5.00th=[18482], 10.00th=[22414], 20.00th=[23200], 00:11:22.902 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:11:22.902 | 70.00th=[25560], 80.00th=[27395], 90.00th=[28181], 95.00th=[28443], 00:11:22.902 | 99.00th=[42206], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:11:22.902 | 99.99th=[44303] 00:11:22.902 write: IOPS=2676, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1004msec); 0 zone resets 00:11:22.902 slat (usec): min=3, max=31516, avg=199.55, stdev=1378.85 00:11:22.902 clat (usec): min=1700, max=47062, avg=23865.49, stdev=5178.16 00:11:22.902 lat (usec): min=4831, max=47080, avg=24065.05, stdev=5058.32 00:11:22.902 clat percentiles (usec): 00:11:22.902 | 1.00th=[ 5735], 5.00th=[16712], 10.00th=[21365], 20.00th=[21890], 00:11:22.902 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:11:22.902 | 70.00th=[23987], 80.00th=[25822], 90.00th=[27657], 95.00th=[31851], 00:11:22.902 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:11:22.902 | 99.99th=[46924] 00:11:22.902 bw ( KiB/s): min= 9269, max=11256, per=15.55%, avg=10262.50, stdev=1405.02, samples=2 00:11:22.902 iops : min= 2317, max= 2814, avg=2565.50, stdev=351.43, samples=2 00:11:22.902 lat (msec) : 2=0.02%, 10=0.91%, 20=5.57%, 50=93.50% 00:11:22.902 cpu : usr=2.29%, sys=7.78%, ctx=113, majf=0, minf=13 00:11:22.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:22.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.902 issued rwts: total=2560,2687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.902 00:11:22.902 Run status group 0 (all jobs): 00:11:22.902 READ: bw=61.7MiB/s 
(64.7MB/s), 9.93MiB/s-22.4MiB/s (10.4MB/s-23.5MB/s), io=62.3MiB (65.3MB), run=1003-1009msec 00:11:22.902 WRITE: bw=64.4MiB/s (67.6MB/s), 10.5MiB/s-23.9MiB/s (11.0MB/s-25.1MB/s), io=65.0MiB (68.2MB), run=1003-1009msec 00:11:22.902 00:11:22.902 Disk stats (read/write): 00:11:22.902 nvme0n1: ios=2097/2304, merge=0/0, ticks=51128/51510, in_queue=102638, util=87.06% 00:11:22.902 nvme0n2: ios=4999/5120, merge=0/0, ticks=50679/48575, in_queue=99254, util=87.21% 00:11:22.902 nvme0n3: ios=4096/4415, merge=0/0, ticks=50609/49599, in_queue=100208, util=89.10% 00:11:22.902 nvme0n4: ios=2048/2240, merge=0/0, ticks=49245/52750, in_queue=101995, util=89.34% 00:11:22.902 06:36:02 -- target/fio.sh@55 -- # sync 00:11:22.902 06:36:02 -- target/fio.sh@59 -- # fio_pid=75514 00:11:22.902 06:36:02 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.902 06:36:02 -- target/fio.sh@61 -- # sleep 3 00:11:22.902 [global] 00:11:22.902 thread=1 00:11:22.902 invalidate=1 00:11:22.902 rw=read 00:11:22.902 time_based=1 00:11:22.902 runtime=10 00:11:22.902 ioengine=libaio 00:11:22.902 direct=1 00:11:22.902 bs=4096 00:11:22.902 iodepth=1 00:11:22.902 norandommap=1 00:11:22.902 numjobs=1 00:11:22.902 00:11:22.902 [job0] 00:11:22.902 filename=/dev/nvme0n1 00:11:22.902 [job1] 00:11:22.902 filename=/dev/nvme0n2 00:11:22.902 [job2] 00:11:22.902 filename=/dev/nvme0n3 00:11:22.902 [job3] 00:11:22.902 filename=/dev/nvme0n4 00:11:22.902 Could not set queue depth (nvme0n1) 00:11:22.902 Could not set queue depth (nvme0n2) 00:11:22.902 Could not set queue depth (nvme0n3) 00:11:22.902 Could not set queue depth (nvme0n4) 00:11:22.902 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.902 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.902 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.902 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.902 fio-3.35 00:11:22.902 Starting 4 threads 00:11:26.212 06:36:05 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:26.212 fio: pid=75558, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:26.212 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39407616, buflen=4096 00:11:26.212 06:36:05 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:26.470 fio: pid=75557, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:26.470 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=68673536, buflen=4096 00:11:26.470 06:36:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.470 06:36:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:26.728 fio: pid=75555, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:26.728 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=44244992, buflen=4096 00:11:26.728 06:36:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.728 06:36:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:26.986 fio: pid=75556, err=121/file:io_u.c:1889, func=io_u error, 
error=Remote I/O error 00:11:26.986 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=50733056, buflen=4096 00:11:26.986 00:11:26.986 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75555: Fri Jul 12 06:36:06 2024 00:11:26.986 read: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(42.2MiB/3513msec) 00:11:26.986 slat (usec): min=8, max=13849, avg=20.60, stdev=216.31 00:11:26.986 clat (usec): min=125, max=4153, avg=302.70, stdev=88.97 00:11:26.986 lat (usec): min=137, max=14230, avg=323.30, stdev=233.56 00:11:26.986 clat percentiles (usec): 00:11:26.986 | 1.00th=[ 149], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 235], 00:11:26.986 | 30.00th=[ 255], 40.00th=[ 281], 50.00th=[ 318], 60.00th=[ 330], 00:11:26.986 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 392], 00:11:26.986 | 99.00th=[ 465], 99.50th=[ 537], 99.90th=[ 1156], 99.95th=[ 1565], 00:11:26.986 | 99.99th=[ 2147] 00:11:26.986 bw ( KiB/s): min=10592, max=14920, per=22.73%, avg=11857.33, stdev=1618.10, samples=6 00:11:26.986 iops : min= 2648, max= 3730, avg=2964.33, stdev=404.53, samples=6 00:11:26.986 lat (usec) : 250=27.74%, 500=71.61%, 750=0.42%, 1000=0.11% 00:11:26.986 lat (msec) : 2=0.08%, 4=0.02%, 10=0.01% 00:11:26.986 cpu : usr=1.31%, sys=4.78%, ctx=10835, majf=0, minf=1 00:11:26.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.986 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.986 issued rwts: total=10803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.986 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75556: Fri Jul 12 06:36:06 2024 00:11:26.986 read: IOPS=3258, BW=12.7MiB/s (13.3MB/s)(48.4MiB/3802msec) 00:11:26.986 slat (usec): min=7, max=13396, avg=23.30, stdev=232.28 00:11:26.986 clat (usec): min=125, max=4186, avg=281.73, stdev=103.05 00:11:26.986 lat (usec): min=137, max=14539, avg=305.03, stdev=258.18 00:11:26.986 clat percentiles (usec): 00:11:26.986 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 210], 00:11:26.986 | 30.00th=[ 235], 40.00th=[ 255], 50.00th=[ 297], 60.00th=[ 322], 00:11:26.986 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 392], 00:11:26.986 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 1139], 99.95th=[ 1532], 00:11:26.986 | 99.99th=[ 3032] 00:11:26.986 bw ( KiB/s): min=10672, max=15534, per=23.67%, avg=12347.14, stdev=2034.52, samples=7 00:11:26.986 iops : min= 2668, max= 3883, avg=3086.71, stdev=508.50, samples=7 00:11:26.986 lat (usec) : 250=37.23%, 500=61.65%, 750=0.86%, 1000=0.13% 00:11:26.986 lat (msec) : 2=0.07%, 4=0.03%, 10=0.01% 00:11:26.986 cpu : usr=1.58%, sys=5.21%, ctx=12397, majf=0, minf=1 00:11:26.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.986 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.986 issued rwts: total=12387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.986 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75557: Fri Jul 12 06:36:06 2024 00:11:26.986 read: IOPS=5179, BW=20.2MiB/s (21.2MB/s)(65.5MiB/3237msec) 00:11:26.986 slat (usec): 
min=10, max=11498, avg=15.41, stdev=105.89 00:11:26.986 clat (usec): min=131, max=3453, avg=176.07, stdev=52.48 00:11:26.986 lat (usec): min=144, max=12019, avg=191.48, stdev=121.23 00:11:26.986 clat percentiles (usec): 00:11:26.986 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:11:26.986 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:26.986 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 208], 00:11:26.986 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 404], 99.95th=[ 938], 00:11:26.986 | 99.99th=[ 2573] 00:11:26.986 bw ( KiB/s): min=20936, max=21808, per=41.09%, avg=21432.00, stdev=378.46, samples=6 00:11:26.986 iops : min= 5234, max= 5452, avg=5358.00, stdev=94.62, samples=6 00:11:26.987 lat (usec) : 250=95.88%, 500=4.03%, 750=0.02%, 1000=0.01% 00:11:26.987 lat (msec) : 2=0.02%, 4=0.02% 00:11:26.987 cpu : usr=1.64%, sys=6.64%, ctx=16775, majf=0, minf=1 00:11:26.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.987 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.987 issued rwts: total=16767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.987 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75558: Fri Jul 12 06:36:06 2024 00:11:26.987 read: IOPS=3257, BW=12.7MiB/s (13.3MB/s)(37.6MiB/2954msec) 00:11:26.987 slat (nsec): min=8706, max=96973, avg=18058.49, stdev=6612.70 00:11:26.987 clat (usec): min=138, max=7670, avg=287.04, stdev=121.37 00:11:26.987 lat (usec): min=152, max=7697, avg=305.10, stdev=122.91 00:11:26.987 clat percentiles (usec): 00:11:26.987 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 186], 00:11:26.987 | 30.00th=[ 206], 40.00th=[ 293], 50.00th=[ 322], 60.00th=[ 334], 00:11:26.987 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 392], 00:11:26.987 | 99.00th=[ 457], 99.50th=[ 529], 99.90th=[ 1020], 99.95th=[ 1467], 00:11:26.987 | 99.99th=[ 7701] 00:11:26.987 bw ( KiB/s): min=10592, max=20232, per=25.65%, avg=13380.80, stdev=4063.19, samples=5 00:11:26.987 iops : min= 2648, max= 5058, avg=3345.20, stdev=1015.80, samples=5 00:11:26.987 lat (usec) : 250=38.32%, 500=60.99%, 750=0.50%, 1000=0.08% 00:11:26.987 lat (msec) : 2=0.07%, 4=0.02%, 10=0.01% 00:11:26.987 cpu : usr=1.19%, sys=5.72%, ctx=9630, majf=0, minf=1 00:11:26.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.987 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.987 issued rwts: total=9622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.987 00:11:26.987 Run status group 0 (all jobs): 00:11:26.987 READ: bw=50.9MiB/s (53.4MB/s), 12.0MiB/s-20.2MiB/s (12.6MB/s-21.2MB/s), io=194MiB (203MB), run=2954-3802msec 00:11:26.987 00:11:26.987 Disk stats (read/write): 00:11:26.987 nvme0n1: ios=10155/0, merge=0/0, ticks=3010/0, in_queue=3010, util=95.11% 00:11:26.987 nvme0n2: ios=11231/0, merge=0/0, ticks=3302/0, in_queue=3302, util=95.31% 00:11:26.987 nvme0n3: ios=16370/0, merge=0/0, ticks=2864/0, in_queue=2864, util=96.24% 00:11:26.987 nvme0n4: ios=9379/0, merge=0/0, ticks=2627/0, in_queue=2627, util=96.56% 00:11:26.987 06:36:06 -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.987 06:36:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:27.244 06:36:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.244 06:36:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:27.501 06:36:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.501 06:36:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:27.793 06:36:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.793 06:36:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.070 06:36:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.070 06:36:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.327 06:36:08 -- target/fio.sh@69 -- # fio_status=0 00:11:28.327 06:36:08 -- target/fio.sh@70 -- # wait 75514 00:11:28.327 06:36:08 -- target/fio.sh@70 -- # fio_status=4 00:11:28.327 06:36:08 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.327 06:36:08 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.327 06:36:08 -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.327 06:36:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:28.327 06:36:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.327 06:36:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:28.327 06:36:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.327 nvmf hotplug test: fio failed as expected 00:11:28.327 06:36:08 -- common/autotest_common.sh@1210 -- # return 0 00:11:28.327 06:36:08 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:28.327 06:36:08 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:28.327 06:36:08 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.586 06:36:08 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:28.586 06:36:08 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:28.586 06:36:08 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:28.586 06:36:08 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:28.586 06:36:08 -- target/fio.sh@91 -- # nvmftestfini 00:11:28.586 06:36:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:28.586 06:36:08 -- nvmf/common.sh@116 -- # sync 00:11:28.586 06:36:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:28.586 06:36:08 -- nvmf/common.sh@119 -- # set +e 00:11:28.586 06:36:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:28.586 06:36:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:28.586 rmmod nvme_tcp 00:11:28.586 rmmod nvme_fabrics 00:11:28.586 rmmod nvme_keyring 00:11:28.586 06:36:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:28.586 06:36:08 -- nvmf/common.sh@123 -- # set -e 00:11:28.586 06:36:08 -- nvmf/common.sh@124 -- # return 0 00:11:28.586 06:36:08 -- nvmf/common.sh@477 -- # '[' -n 75128 ']' 00:11:28.586 06:36:08 -- 
nvmf/common.sh@478 -- # killprocess 75128 00:11:28.586 06:36:08 -- common/autotest_common.sh@926 -- # '[' -z 75128 ']' 00:11:28.586 06:36:08 -- common/autotest_common.sh@930 -- # kill -0 75128 00:11:28.586 06:36:08 -- common/autotest_common.sh@931 -- # uname 00:11:28.586 06:36:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:28.586 06:36:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75128 00:11:28.586 killing process with pid 75128 00:11:28.586 06:36:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:28.586 06:36:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:28.586 06:36:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75128' 00:11:28.586 06:36:08 -- common/autotest_common.sh@945 -- # kill 75128 00:11:28.586 06:36:08 -- common/autotest_common.sh@950 -- # wait 75128 00:11:28.844 06:36:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:28.844 06:36:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:28.844 06:36:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:28.844 06:36:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.844 06:36:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:28.844 06:36:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.844 06:36:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.844 06:36:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.844 06:36:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:28.844 00:11:28.844 real 0m19.503s 00:11:28.844 user 1m14.170s 00:11:28.844 sys 0m10.284s 00:11:28.844 06:36:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.844 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:11:28.844 ************************************ 00:11:28.844 END TEST nvmf_fio_target 00:11:28.844 ************************************ 00:11:28.844 06:36:08 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:28.844 06:36:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:28.844 06:36:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.844 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:11:28.844 ************************************ 00:11:28.844 START TEST nvmf_bdevio 00:11:28.844 ************************************ 00:11:28.844 06:36:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.103 * Looking for test storage... 
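For reference, the nvmf_fio_target teardown that the trace above just walked through reduces to a handful of commands. A condensed sketch of what the helpers in test/nvmf/common.sh and autotest_common.sh executed (rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; pid 75128 was this run's nvmf_tgt):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the kernel initiator
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                            # also unloads nvme_fabrics/nvme_keyring, per the rmmod lines above
    kill 75128 && wait 75128                           # stop the target process
    ip -4 addr flush nvmf_init_if                      # clear the test address from the veth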
00:11:29.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.103 06:36:08 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.103 06:36:08 -- nvmf/common.sh@7 -- # uname -s 00:11:29.103 06:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.103 06:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.103 06:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.103 06:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.103 06:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.103 06:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.103 06:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.103 06:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.103 06:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.103 06:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.103 06:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:11:29.103 06:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:11:29.103 06:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.103 06:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.103 06:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.103 06:36:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.103 06:36:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.103 06:36:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.103 06:36:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.103 06:36:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.103 06:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.103 06:36:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.103 06:36:08 -- 
paths/export.sh@5 -- # export PATH 00:11:29.103 06:36:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.103 06:36:08 -- nvmf/common.sh@46 -- # : 0 00:11:29.103 06:36:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:29.103 06:36:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:29.103 06:36:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:29.103 06:36:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.103 06:36:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.103 06:36:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:29.103 06:36:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:29.103 06:36:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:29.103 06:36:08 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.103 06:36:08 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.103 06:36:08 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:29.103 06:36:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:29.103 06:36:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.103 06:36:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:29.103 06:36:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:29.103 06:36:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:29.103 06:36:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.103 06:36:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.103 06:36:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.103 06:36:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:29.103 06:36:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:29.103 06:36:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:29.103 06:36:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:29.103 06:36:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:29.103 06:36:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:29.103 06:36:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.103 06:36:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.103 06:36:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:29.103 06:36:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:29.103 06:36:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.103 06:36:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.103 06:36:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.103 06:36:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.103 06:36:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.103 06:36:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.103 06:36:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.103 06:36:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.103 06:36:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:29.103 
06:36:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:29.103 Cannot find device "nvmf_tgt_br" 00:11:29.103 06:36:08 -- nvmf/common.sh@154 -- # true 00:11:29.103 06:36:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.103 Cannot find device "nvmf_tgt_br2" 00:11:29.103 06:36:08 -- nvmf/common.sh@155 -- # true 00:11:29.103 06:36:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:29.103 06:36:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:29.103 Cannot find device "nvmf_tgt_br" 00:11:29.103 06:36:08 -- nvmf/common.sh@157 -- # true 00:11:29.103 06:36:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:29.103 Cannot find device "nvmf_tgt_br2" 00:11:29.103 06:36:08 -- nvmf/common.sh@158 -- # true 00:11:29.103 06:36:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:29.103 06:36:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:29.103 06:36:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.103 06:36:08 -- nvmf/common.sh@161 -- # true 00:11:29.103 06:36:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.103 06:36:08 -- nvmf/common.sh@162 -- # true 00:11:29.103 06:36:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.103 06:36:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.103 06:36:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.103 06:36:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.103 06:36:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.103 06:36:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.103 06:36:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.103 06:36:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:29.103 06:36:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:29.103 06:36:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:29.103 06:36:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:29.362 06:36:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:29.362 06:36:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:29.362 06:36:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.362 06:36:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.362 06:36:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.362 06:36:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:29.362 06:36:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:29.362 06:36:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.362 06:36:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.362 06:36:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.362 06:36:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.362 06:36:09 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.362 06:36:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:29.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:11:29.362 00:11:29.362 --- 10.0.0.2 ping statistics --- 00:11:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.362 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:29.362 06:36:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:29.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:29.362 00:11:29.362 --- 10.0.0.3 ping statistics --- 00:11:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.362 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:29.362 06:36:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:29.362 00:11:29.362 --- 10.0.0.1 ping statistics --- 00:11:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.362 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:29.362 06:36:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.362 06:36:09 -- nvmf/common.sh@421 -- # return 0 00:11:29.362 06:36:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:29.362 06:36:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.362 06:36:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:29.362 06:36:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:29.362 06:36:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.362 06:36:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:29.362 06:36:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:29.362 06:36:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:29.362 06:36:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:29.362 06:36:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:29.362 06:36:09 -- common/autotest_common.sh@10 -- # set +x 00:11:29.362 06:36:09 -- nvmf/common.sh@469 -- # nvmfpid=75819 00:11:29.362 06:36:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:29.362 06:36:09 -- nvmf/common.sh@470 -- # waitforlisten 75819 00:11:29.362 06:36:09 -- common/autotest_common.sh@819 -- # '[' -z 75819 ']' 00:11:29.362 06:36:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.362 06:36:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:29.362 06:36:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.362 06:36:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:29.362 06:36:09 -- common/autotest_common.sh@10 -- # set +x 00:11:29.362 [2024-07-12 06:36:09.204727] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
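The ip/iptables lines above build one bridge with veth pairs into a dedicated target network namespace: nvmf_tgt runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.2, while the initiator reaches it from the host side via nvmf_init_if/10.0.0.1 across the nvmf_br bridge, which the three ping checks then verify. A minimal standalone reproduction of that topology (run as root; names and addresses copied from the trace, with the second target interface nvmf_tgt_if2/10.0.0.3 set up the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # both _br veth peers hang off the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # host -> namespace, as in the trace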
00:11:29.362 [2024-07-12 06:36:09.204832] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.620 [2024-07-12 06:36:09.348770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.620 [2024-07-12 06:36:09.391187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.620 [2024-07-12 06:36:09.391384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.620 [2024-07-12 06:36:09.391399] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.620 [2024-07-12 06:36:09.391410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.620 [2024-07-12 06:36:09.391569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:29.620 [2024-07-12 06:36:09.391948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:29.620 [2024-07-12 06:36:09.392166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:29.620 [2024-07-12 06:36:09.392169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.555 06:36:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:30.555 06:36:10 -- common/autotest_common.sh@852 -- # return 0 00:11:30.555 06:36:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:30.555 06:36:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:30.555 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:30.555 06:36:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.555 06:36:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.555 06:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.555 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:30.555 [2024-07-12 06:36:10.200651] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.555 06:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.555 06:36:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.555 06:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.555 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:30.555 Malloc0 00:11:30.555 06:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.555 06:36:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.555 06:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.555 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:30.555 06:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.555 06:36:10 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.555 06:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.555 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:30.555 06:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.555 06:36:10 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.555 06:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.555 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:30.555 
[2024-07-12 06:36:10.259738] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.556 06:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.556 06:36:10 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:30.556 06:36:10 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:30.556 06:36:10 -- nvmf/common.sh@520 -- # config=() 00:11:30.556 06:36:10 -- nvmf/common.sh@520 -- # local subsystem config 00:11:30.556 06:36:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:30.556 06:36:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:30.556 { 00:11:30.556 "params": { 00:11:30.556 "name": "Nvme$subsystem", 00:11:30.556 "trtype": "$TEST_TRANSPORT", 00:11:30.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:30.556 "adrfam": "ipv4", 00:11:30.556 "trsvcid": "$NVMF_PORT", 00:11:30.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:30.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:30.556 "hdgst": ${hdgst:-false}, 00:11:30.556 "ddgst": ${ddgst:-false} 00:11:30.556 }, 00:11:30.556 "method": "bdev_nvme_attach_controller" 00:11:30.556 } 00:11:30.556 EOF 00:11:30.556 )") 00:11:30.556 06:36:10 -- nvmf/common.sh@542 -- # cat 00:11:30.556 06:36:10 -- nvmf/common.sh@544 -- # jq . 00:11:30.556 06:36:10 -- nvmf/common.sh@545 -- # IFS=, 00:11:30.556 06:36:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:30.556 "params": { 00:11:30.556 "name": "Nvme1", 00:11:30.556 "trtype": "tcp", 00:11:30.556 "traddr": "10.0.0.2", 00:11:30.556 "adrfam": "ipv4", 00:11:30.556 "trsvcid": "4420", 00:11:30.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:30.556 "hdgst": false, 00:11:30.556 "ddgst": false 00:11:30.556 }, 00:11:30.556 "method": "bdev_nvme_attach_controller" 00:11:30.556 }' 00:11:30.556 [2024-07-12 06:36:10.312321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:30.556 [2024-07-12 06:36:10.312400] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75861 ] 00:11:30.815 [2024-07-12 06:36:10.480154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.815 [2024-07-12 06:36:10.530721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.815 [2024-07-12 06:36:10.530851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.815 [2024-07-12 06:36:10.530857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.815 [2024-07-12 06:36:10.670803] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
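The two RPC errors just above are benign: bdevio is a second SPDK application started while nvmf_tgt still owns the default /var/tmp/spdk.sock, so bdevio simply runs without its own RPC server and attaches to the target over TCP using the JSON blob printed earlier. The bdev it exercises comes from the provisioning sequence traced just before, which condenses to the following target-side calls (rpc.py again standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, against the target's RPC socket):

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced above
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk with 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

That is why the I/O target reported next is Nvme1n1 with 131072 blocks of 512 bytes: it is the Malloc0 ramdisk exported over NVMe/TCP and re-attached on the initiator side as an NVMe namespace.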
00:11:30.815 [2024-07-12 06:36:10.670848] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:30.815 I/O targets: 00:11:30.815 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:30.815 00:11:30.815 00:11:30.815 CUnit - A unit testing framework for C - Version 2.1-3 00:11:30.815 http://cunit.sourceforge.net/ 00:11:30.815 00:11:30.815 00:11:30.815 Suite: bdevio tests on: Nvme1n1 00:11:30.815 Test: blockdev write read block ...passed 00:11:30.815 Test: blockdev write zeroes read block ...passed 00:11:30.815 Test: blockdev write zeroes read no split ...passed 00:11:30.815 Test: blockdev write zeroes read split ...passed 00:11:30.815 Test: blockdev write zeroes read split partial ...passed 00:11:30.815 Test: blockdev reset ...[2024-07-12 06:36:10.705908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:30.815 [2024-07-12 06:36:10.706044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d5350 (9): Bad file descriptor 00:11:30.815 [2024-07-12 06:36:10.720174] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:30.815 passed 00:11:30.815 Test: blockdev write read 8 blocks ...passed 00:11:30.815 Test: blockdev write read size > 128k ...passed 00:11:30.815 Test: blockdev write read invalid size ...passed 00:11:30.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:30.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:30.815 Test: blockdev write read max offset ...passed 00:11:30.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:30.815 Test: blockdev writev readv 8 blocks ...passed 00:11:30.815 Test: blockdev writev readv 30 x 1block ...passed 00:11:30.815 Test: blockdev writev readv block ...passed 00:11:30.815 Test: blockdev writev readv size > 128k ...passed 00:11:30.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:30.815 Test: blockdev comparev and writev ...[2024-07-12 06:36:10.728190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.728387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.728507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.728621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.729119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.729254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.729373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.729483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.730212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.730335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.730442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.730564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.731163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.731307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.731421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.815 [2024-07-12 06:36:10.731542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:30.815 passed 00:11:30.815 Test: blockdev nvme passthru rw ...passed 00:11:30.815 Test: blockdev nvme passthru vendor specific ...[2024-07-12 06:36:10.732466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.815 [2024-07-12 06:36:10.732605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:30.815 [2024-07-12 06:36:10.732855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.815 [2024-07-12 06:36:10.732993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:31.074 [2024-07-12 06:36:10.733225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.074 [2024-07-12 06:36:10.733362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:31.074 [2024-07-12 06:36:10.733563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.074 [2024-07-12 06:36:10.733663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:31.074 passed 00:11:31.074 Test: blockdev nvme admin passthru ...passed 00:11:31.074 Test: blockdev copy ...passed 00:11:31.074 00:11:31.074 Run Summary: Type Total Ran Passed Failed Inactive 00:11:31.074 suites 1 1 n/a 0 0 00:11:31.074 tests 23 23 23 0 0 00:11:31.074 asserts 152 152 152 0 n/a 00:11:31.074 00:11:31.074 Elapsed time = 0.150 seconds 00:11:31.074 06:36:10 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.074 06:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.074 06:36:10 -- common/autotest_common.sh@10 -- # set +x 00:11:31.074 06:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.074 06:36:10 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:31.074 06:36:10 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:31.074 06:36:10 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:31.074 06:36:10 -- nvmf/common.sh@116 -- # sync 00:11:31.074 06:36:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:31.074 06:36:10 -- nvmf/common.sh@119 -- # set +e 00:11:31.074 06:36:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:31.074 06:36:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:31.074 rmmod nvme_tcp 00:11:31.074 rmmod nvme_fabrics 00:11:31.074 rmmod nvme_keyring 00:11:31.074 06:36:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:31.333 06:36:10 -- nvmf/common.sh@123 -- # set -e 00:11:31.333 06:36:10 -- nvmf/common.sh@124 -- # return 0 00:11:31.333 06:36:10 -- nvmf/common.sh@477 -- # '[' -n 75819 ']' 00:11:31.333 06:36:10 -- nvmf/common.sh@478 -- # killprocess 75819 00:11:31.333 06:36:10 -- common/autotest_common.sh@926 -- # '[' -z 75819 ']' 00:11:31.333 06:36:10 -- common/autotest_common.sh@930 -- # kill -0 75819 00:11:31.333 06:36:10 -- common/autotest_common.sh@931 -- # uname 00:11:31.333 06:36:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:31.333 06:36:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75819 00:11:31.333 killing process with pid 75819 00:11:31.333 06:36:11 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:11:31.333 06:36:11 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:11:31.333 06:36:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75819' 00:11:31.333 06:36:11 -- common/autotest_common.sh@945 -- # kill 75819 00:11:31.333 06:36:11 -- common/autotest_common.sh@950 -- # wait 75819 00:11:31.333 06:36:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:31.333 06:36:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:31.333 06:36:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:31.333 06:36:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.333 06:36:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:31.333 06:36:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.333 06:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.333 06:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.333 06:36:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:31.333 00:11:31.333 real 0m2.513s 00:11:31.333 user 0m8.270s 00:11:31.333 sys 0m0.646s 00:11:31.333 06:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.333 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:31.333 ************************************ 00:11:31.333 END TEST nvmf_bdevio 00:11:31.333 ************************************ 00:11:31.592 06:36:11 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:31.592 06:36:11 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:31.592 06:36:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:31.592 06:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.592 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:31.592 ************************************ 00:11:31.592 START TEST nvmf_bdevio_no_huge 00:11:31.592 ************************************ 00:11:31.592 06:36:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:31.592 * Looking for test storage... 
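Before the no-huge variant proceeds, two notes on the nvmf_bdevio run above. First, the COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) notices in the comparev-and-writev output are the intended result of the fused compare-and-write cases, not real errors; the suite still reports 23/23 tests and 152/152 asserts passed. Second, nvmftestfini tears the harness down in a fixed order, roughly the hedged sketch below; _remove_spdk_ns runs with its trace redirected away, so the netns deletion shown here is an assumption rather than a line from the log:

sync
modprobe -v -r nvme-tcp       # rmmod output above shows nvme_fabrics/nvme_keyring go too
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"              # killprocess: checks comm is a reactor first
ip netns delete nvmf_tgt_ns_spdk 2> /dev/null   # assumed body of _remove_spdk_ns
ip -4 addr flush nvmf_init_if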
00:11:31.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:31.592 06:36:11 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:31.592 06:36:11 -- nvmf/common.sh@7 -- # uname -s 00:11:31.592 06:36:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.592 06:36:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.592 06:36:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.592 06:36:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.592 06:36:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.592 06:36:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.592 06:36:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.592 06:36:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.592 06:36:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.592 06:36:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.592 06:36:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:11:31.592 06:36:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:11:31.592 06:36:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.592 06:36:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.592 06:36:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:31.592 06:36:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:31.592 06:36:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.592 06:36:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.592 06:36:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.592 06:36:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.592 06:36:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.592 06:36:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.592 06:36:11 -- 
paths/export.sh@5 -- # export PATH 00:11:31.592 06:36:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.592 06:36:11 -- nvmf/common.sh@46 -- # : 0 00:11:31.592 06:36:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:31.592 06:36:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:31.592 06:36:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:31.592 06:36:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.592 06:36:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.592 06:36:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:31.592 06:36:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:31.592 06:36:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:31.592 06:36:11 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.593 06:36:11 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.593 06:36:11 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:31.593 06:36:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:31.593 06:36:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.593 06:36:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:31.593 06:36:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:31.593 06:36:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:31.593 06:36:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.593 06:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.593 06:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.593 06:36:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:31.593 06:36:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:31.593 06:36:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:31.593 06:36:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:31.593 06:36:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:31.593 06:36:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:31.593 06:36:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.593 06:36:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.593 06:36:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:31.593 06:36:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:31.593 06:36:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:31.593 06:36:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:31.593 06:36:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:31.593 06:36:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.593 06:36:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:31.593 06:36:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:31.593 06:36:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:31.593 06:36:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:31.593 06:36:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:31.593 
06:36:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:31.593 Cannot find device "nvmf_tgt_br" 00:11:31.593 06:36:11 -- nvmf/common.sh@154 -- # true 00:11:31.593 06:36:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:31.593 Cannot find device "nvmf_tgt_br2" 00:11:31.593 06:36:11 -- nvmf/common.sh@155 -- # true 00:11:31.593 06:36:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:31.593 06:36:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:31.593 Cannot find device "nvmf_tgt_br" 00:11:31.593 06:36:11 -- nvmf/common.sh@157 -- # true 00:11:31.593 06:36:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:31.593 Cannot find device "nvmf_tgt_br2" 00:11:31.593 06:36:11 -- nvmf/common.sh@158 -- # true 00:11:31.593 06:36:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:31.593 06:36:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:31.593 06:36:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:31.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:31.593 06:36:11 -- nvmf/common.sh@161 -- # true 00:11:31.593 06:36:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:31.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:31.593 06:36:11 -- nvmf/common.sh@162 -- # true 00:11:31.593 06:36:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:31.851 06:36:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:31.851 06:36:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:31.851 06:36:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:31.851 06:36:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:31.851 06:36:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:31.851 06:36:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:31.851 06:36:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:31.851 06:36:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:31.851 06:36:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:31.851 06:36:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:31.851 06:36:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:31.851 06:36:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:31.851 06:36:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:31.851 06:36:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:31.851 06:36:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:31.851 06:36:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:31.851 06:36:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:31.851 06:36:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:31.851 06:36:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:31.851 06:36:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:31.851 06:36:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:31.851 06:36:11 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:31.851 06:36:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:31.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:31.851 00:11:31.851 --- 10.0.0.2 ping statistics --- 00:11:31.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.851 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:31.851 06:36:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:31.851 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:31.851 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:11:31.851 00:11:31.851 --- 10.0.0.3 ping statistics --- 00:11:31.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.851 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:31.851 06:36:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:31.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:31.851 00:11:31.851 --- 10.0.0.1 ping statistics --- 00:11:31.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.851 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:31.851 06:36:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.851 06:36:11 -- nvmf/common.sh@421 -- # return 0 00:11:31.851 06:36:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:31.851 06:36:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.851 06:36:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:31.851 06:36:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:31.851 06:36:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.851 06:36:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:31.851 06:36:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:31.851 06:36:11 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:31.851 06:36:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:31.851 06:36:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:31.851 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:31.851 06:36:11 -- nvmf/common.sh@469 -- # nvmfpid=76030 00:11:31.851 06:36:11 -- nvmf/common.sh@470 -- # waitforlisten 76030 00:11:31.851 06:36:11 -- common/autotest_common.sh@819 -- # '[' -z 76030 ']' 00:11:31.851 06:36:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:31.851 06:36:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.851 06:36:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:31.851 06:36:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.851 06:36:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:31.851 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:32.110 [2024-07-12 06:36:11.774569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
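The ping replies above confirm the harness that the preceding ip/iptables calls built: the target runs in its own network namespace and reaches the initiator in the root namespace through veth pairs slaved to a single bridge. A condensed, hedged sketch of that topology, taken from the traced commands (the second target interface nvmf_tgt_if2/10.0.0.3 is handled the same way and omitted here, along with error handling):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity check, as in the trace

The EAL banner that follows belongs to the target started without hugepages (--no-huge -s 1024), which is the point of this suite.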
00:11:32.110 [2024-07-12 06:36:11.774681] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:32.110 [2024-07-12 06:36:11.922359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.110 [2024-07-12 06:36:12.019844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:32.110 [2024-07-12 06:36:12.020032] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.110 [2024-07-12 06:36:12.020050] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.110 [2024-07-12 06:36:12.020061] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.110 [2024-07-12 06:36:12.020162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:32.110 [2024-07-12 06:36:12.020692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:32.110 [2024-07-12 06:36:12.020815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:32.110 [2024-07-12 06:36:12.020822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.045 06:36:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.045 06:36:12 -- common/autotest_common.sh@852 -- # return 0 00:11:33.045 06:36:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:33.045 06:36:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:33.045 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 06:36:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.045 06:36:12 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.045 06:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.045 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 [2024-07-12 06:36:12.774012] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.045 06:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.045 06:36:12 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:33.045 06:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.045 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 Malloc0 00:11:33.045 06:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.045 06:36:12 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:33.045 06:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.045 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 06:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.045 06:36:12 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.045 06:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.045 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 06:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.045 06:36:12 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.045 06:36:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.045 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 
[2024-07-12 06:36:12.812378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.045 06:36:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.045 06:36:12 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:33.045 06:36:12 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:33.045 06:36:12 -- nvmf/common.sh@520 -- # config=() 00:11:33.045 06:36:12 -- nvmf/common.sh@520 -- # local subsystem config 00:11:33.045 06:36:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:33.045 06:36:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:33.045 { 00:11:33.045 "params": { 00:11:33.045 "name": "Nvme$subsystem", 00:11:33.045 "trtype": "$TEST_TRANSPORT", 00:11:33.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:33.045 "adrfam": "ipv4", 00:11:33.045 "trsvcid": "$NVMF_PORT", 00:11:33.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:33.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:33.045 "hdgst": ${hdgst:-false}, 00:11:33.045 "ddgst": ${ddgst:-false} 00:11:33.045 }, 00:11:33.045 "method": "bdev_nvme_attach_controller" 00:11:33.045 } 00:11:33.045 EOF 00:11:33.045 )") 00:11:33.045 06:36:12 -- nvmf/common.sh@542 -- # cat 00:11:33.045 06:36:12 -- nvmf/common.sh@544 -- # jq . 00:11:33.045 06:36:12 -- nvmf/common.sh@545 -- # IFS=, 00:11:33.045 06:36:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:33.045 "params": { 00:11:33.045 "name": "Nvme1", 00:11:33.045 "trtype": "tcp", 00:11:33.045 "traddr": "10.0.0.2", 00:11:33.045 "adrfam": "ipv4", 00:11:33.045 "trsvcid": "4420", 00:11:33.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:33.045 "hdgst": false, 00:11:33.045 "ddgst": false 00:11:33.045 }, 00:11:33.045 "method": "bdev_nvme_attach_controller" 00:11:33.045 }' 00:11:33.045 [2024-07-12 06:36:12.865522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:33.045 [2024-07-12 06:36:12.865612] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76066 ] 00:11:33.303 [2024-07-12 06:36:13.006496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.303 [2024-07-12 06:36:13.122871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.303 [2024-07-12 06:36:13.122993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.303 [2024-07-12 06:36:13.122979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.561 [2024-07-12 06:36:13.286273] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
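Unlike the first bdevio run, this one passes --no-huge -s 1024 through to DPDK, and the EAL parameter line above shows the consequence: without hugepage-backed memory the EAL cannot rely on stable physical addresses, so it runs with --iova-mode=va, where the earlier run used --iova-mode=pa with --huge-unlink. A side-by-side sketch (gen_json is the hypothetical helper sketched earlier; the flag-to-EAL mapping is read off the two parameter dumps):

# hugepage run -> EAL: --huge-unlink --iova-mode=pa --match-allocations
bdevio --json <(gen_json)
# no-huge run  -> EAL: -m 1024 --no-huge --iova-mode=va
bdevio --json <(gen_json) --no-huge -s 1024

The rpc.c bind errors at this point are the same benign pair seen in the first run.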
00:11:33.561 [2024-07-12 06:36:13.286736] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:33.561 I/O targets: 00:11:33.561 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:33.561 00:11:33.561 00:11:33.561 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.561 http://cunit.sourceforge.net/ 00:11:33.561 00:11:33.561 00:11:33.561 Suite: bdevio tests on: Nvme1n1 00:11:33.561 Test: blockdev write read block ...passed 00:11:33.561 Test: blockdev write zeroes read block ...passed 00:11:33.561 Test: blockdev write zeroes read no split ...passed 00:11:33.561 Test: blockdev write zeroes read split ...passed 00:11:33.561 Test: blockdev write zeroes read split partial ...passed 00:11:33.561 Test: blockdev reset ...[2024-07-12 06:36:13.330682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:33.561 [2024-07-12 06:36:13.331078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c1ee0 (9): Bad file descriptor 00:11:33.561 [2024-07-12 06:36:13.344828] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:33.561 passed 00:11:33.561 Test: blockdev write read 8 blocks ...passed 00:11:33.561 Test: blockdev write read size > 128k ...passed 00:11:33.561 Test: blockdev write read invalid size ...passed 00:11:33.561 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.561 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.561 Test: blockdev write read max offset ...passed 00:11:33.561 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.561 Test: blockdev writev readv 8 blocks ...passed 00:11:33.561 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.561 Test: blockdev writev readv block ...passed 00:11:33.561 Test: blockdev writev readv size > 128k ...passed 00:11:33.561 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.561 Test: blockdev comparev and writev ...[2024-07-12 06:36:13.353319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.353364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.353387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.353399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.353686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.353703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.353720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.353732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.354029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.354047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.354065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.354075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.354339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.354360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.354378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.561 [2024-07-12 06:36:13.354389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:33.561 passed 00:11:33.561 Test: blockdev nvme passthru rw ...passed 00:11:33.561 Test: blockdev nvme passthru vendor specific ...[2024-07-12 06:36:13.355211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.561 [2024-07-12 06:36:13.355240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.355351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.561 [2024-07-12 06:36:13.355367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:33.561 [2024-07-12 06:36:13.355471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.561 [2024-07-12 06:36:13.355493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:33.561 passed 00:11:33.561 Test: blockdev nvme admin passthru ...[2024-07-12 06:36:13.355609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.561 [2024-07-12 06:36:13.355631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:33.561 passed 00:11:33.561 Test: blockdev copy ...passed 00:11:33.561 00:11:33.561 Run Summary: Type Total Ran Passed Failed Inactive 00:11:33.561 suites 1 1 n/a 0 0 00:11:33.561 tests 23 23 23 0 0 00:11:33.561 asserts 152 152 152 0 n/a 00:11:33.561 00:11:33.561 Elapsed time = 0.174 seconds 00:11:33.820 06:36:13 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.820 06:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.820 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:11:33.820 06:36:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.820 06:36:13 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:33.820 06:36:13 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:33.820 06:36:13 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:33.820 06:36:13 -- nvmf/common.sh@116 -- # sync 00:11:33.820 06:36:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:33.820 06:36:13 -- nvmf/common.sh@119 -- # set +e 00:11:33.820 06:36:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:33.820 06:36:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:33.820 rmmod nvme_tcp 00:11:33.820 rmmod nvme_fabrics 00:11:33.820 rmmod nvme_keyring 00:11:34.078 06:36:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:34.078 06:36:13 -- nvmf/common.sh@123 -- # set -e 00:11:34.078 06:36:13 -- nvmf/common.sh@124 -- # return 0 00:11:34.078 06:36:13 -- nvmf/common.sh@477 -- # '[' -n 76030 ']' 00:11:34.078 06:36:13 -- nvmf/common.sh@478 -- # killprocess 76030 00:11:34.078 06:36:13 -- common/autotest_common.sh@926 -- # '[' -z 76030 ']' 00:11:34.078 06:36:13 -- common/autotest_common.sh@930 -- # kill -0 76030 00:11:34.078 06:36:13 -- common/autotest_common.sh@931 -- # uname 00:11:34.078 06:36:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:34.078 06:36:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76030 00:11:34.078 06:36:13 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:11:34.078 killing process with pid 76030 00:11:34.078 06:36:13 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:11:34.078 06:36:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76030' 00:11:34.078 06:36:13 -- common/autotest_common.sh@945 -- # kill 76030 00:11:34.078 06:36:13 -- common/autotest_common.sh@950 -- # wait 76030 00:11:34.336 06:36:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:34.336 06:36:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:34.336 06:36:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:34.336 06:36:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.336 06:36:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:34.336 06:36:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.336 06:36:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.336 06:36:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.336 06:36:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:34.336 00:11:34.336 real 0m2.894s 00:11:34.336 user 0m9.484s 00:11:34.337 sys 0m1.080s 00:11:34.337 ************************************ 00:11:34.337 END TEST nvmf_bdevio_no_huge 00:11:34.337 ************************************ 00:11:34.337 06:36:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.337 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:11:34.337 06:36:14 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:34.337 06:36:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:34.337 06:36:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.337 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:11:34.337 ************************************ 00:11:34.337 START TEST nvmf_tls 00:11:34.337 ************************************ 00:11:34.337 06:36:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:34.595 * Looking for test storage... 
00:11:34.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.595 06:36:14 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:34.595 06:36:14 -- nvmf/common.sh@7 -- # uname -s 00:11:34.595 06:36:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.595 06:36:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.595 06:36:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.595 06:36:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.595 06:36:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.595 06:36:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.595 06:36:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.595 06:36:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.595 06:36:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.595 06:36:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.595 06:36:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:11:34.595 06:36:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:11:34.595 06:36:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.595 06:36:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.595 06:36:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:34.595 06:36:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.595 06:36:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.595 06:36:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.595 06:36:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.595 06:36:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.595 06:36:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.596 06:36:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.596 06:36:14 -- paths/export.sh@5 
-- # export PATH 00:11:34.596 06:36:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.596 06:36:14 -- nvmf/common.sh@46 -- # : 0 00:11:34.596 06:36:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:34.596 06:36:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:34.596 06:36:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:34.596 06:36:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.596 06:36:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.596 06:36:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:34.596 06:36:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:34.596 06:36:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:34.596 06:36:14 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.596 06:36:14 -- target/tls.sh@71 -- # nvmftestinit 00:11:34.596 06:36:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:34.596 06:36:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.596 06:36:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:34.596 06:36:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:34.596 06:36:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:34.596 06:36:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.596 06:36:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.596 06:36:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.596 06:36:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:34.596 06:36:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:34.596 06:36:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:34.596 06:36:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:34.596 06:36:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:34.596 06:36:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:34.596 06:36:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.596 06:36:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.596 06:36:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:34.596 06:36:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:34.596 06:36:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:34.596 06:36:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:34.596 06:36:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:34.596 06:36:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.596 06:36:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:34.596 06:36:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:34.596 06:36:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:34.596 06:36:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:34.596 06:36:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:34.596 06:36:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:11:34.596 Cannot find device "nvmf_tgt_br" 00:11:34.596 06:36:14 -- nvmf/common.sh@154 -- # true 00:11:34.596 06:36:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:34.596 Cannot find device "nvmf_tgt_br2" 00:11:34.596 06:36:14 -- nvmf/common.sh@155 -- # true 00:11:34.596 06:36:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:34.596 06:36:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:34.596 Cannot find device "nvmf_tgt_br" 00:11:34.596 06:36:14 -- nvmf/common.sh@157 -- # true 00:11:34.596 06:36:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:34.596 Cannot find device "nvmf_tgt_br2" 00:11:34.596 06:36:14 -- nvmf/common.sh@158 -- # true 00:11:34.596 06:36:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:34.596 06:36:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:34.596 06:36:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.596 06:36:14 -- nvmf/common.sh@161 -- # true 00:11:34.596 06:36:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.596 06:36:14 -- nvmf/common.sh@162 -- # true 00:11:34.596 06:36:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.596 06:36:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.596 06:36:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.596 06:36:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.596 06:36:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.596 06:36:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.596 06:36:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.596 06:36:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:34.596 06:36:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:34.854 06:36:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:34.854 06:36:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:34.854 06:36:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:34.854 06:36:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:34.854 06:36:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.855 06:36:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.855 06:36:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.855 06:36:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:34.855 06:36:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:34.855 06:36:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.855 06:36:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.855 06:36:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.855 06:36:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.855 06:36:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:11:34.855 06:36:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:34.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:34.855 00:11:34.855 --- 10.0.0.2 ping statistics --- 00:11:34.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.855 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:34.855 06:36:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:34.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:34.855 00:11:34.855 --- 10.0.0.3 ping statistics --- 00:11:34.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.855 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:34.855 06:36:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:34.855 00:11:34.855 --- 10.0.0.1 ping statistics --- 00:11:34.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.855 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:34.855 06:36:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.855 06:36:14 -- nvmf/common.sh@421 -- # return 0 00:11:34.855 06:36:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:34.855 06:36:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.855 06:36:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:34.855 06:36:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:34.855 06:36:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.855 06:36:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:34.855 06:36:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:34.855 06:36:14 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:34.855 06:36:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:34.855 06:36:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:34.855 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:11:34.855 06:36:14 -- nvmf/common.sh@469 -- # nvmfpid=76243 00:11:34.855 06:36:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:34.855 06:36:14 -- nvmf/common.sh@470 -- # waitforlisten 76243 00:11:34.855 06:36:14 -- common/autotest_common.sh@819 -- # '[' -z 76243 ']' 00:11:34.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.855 06:36:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.855 06:36:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.855 06:36:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.855 06:36:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.855 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:11:34.855 [2024-07-12 06:36:14.705426] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
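This target is started with --wait-for-rpc because the TLS knobs live on the socket implementation and have to be set before the framework initializes and sockets are created. The RPC sequence traced below follows that order: pick ssl as the default sock implementation, set and read back --tls-version (13, then 7) and the kTLS flag via sock_impl_set_options/sock_impl_get_options, and only then call framework_start_init, after which the transport and a TLS-enabled listener come up. A hedged sketch of the happy path (rpc.py stands for scripts/rpc.py as invoked in the trace):

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13      # must precede framework init
rpc.py sock_impl_get_options -i ssl | jq -r .tls_version  # expect 13
rpc.py framework_start_init                               # sock options frozen from here
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                         # -k marks the listener as TLS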
00:11:34.855 [2024-07-12 06:36:14.705518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.113 [2024-07-12 06:36:14.851702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.113 [2024-07-12 06:36:14.893222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:35.113 [2024-07-12 06:36:14.893389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.113 [2024-07-12 06:36:14.893406] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.113 [2024-07-12 06:36:14.893416] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.113 [2024-07-12 06:36:14.893447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.047 06:36:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:36.047 06:36:15 -- common/autotest_common.sh@852 -- # return 0 00:11:36.047 06:36:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:36.047 06:36:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:36.047 06:36:15 -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 06:36:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.047 06:36:15 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:36.047 06:36:15 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:36.047 true 00:11:36.305 06:36:15 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:36.305 06:36:15 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:36.563 06:36:16 -- target/tls.sh@82 -- # version=0 00:11:36.563 06:36:16 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:36.563 06:36:16 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:36.820 06:36:16 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:36.820 06:36:16 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:37.077 06:36:16 -- target/tls.sh@90 -- # version=13 00:11:37.077 06:36:16 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:37.077 06:36:16 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:37.336 06:36:17 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:37.336 06:36:17 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:37.604 06:36:17 -- target/tls.sh@98 -- # version=7 00:11:37.604 06:36:17 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:37.604 06:36:17 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:37.604 06:36:17 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:37.891 06:36:17 -- target/tls.sh@105 -- # ktls=false 00:11:37.891 06:36:17 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:37.891 06:36:17 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:38.149 06:36:17 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:38.149 06:36:17 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:11:38.409 06:36:18 -- target/tls.sh@113 -- # ktls=true 00:11:38.409 06:36:18 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:38.409 06:36:18 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:38.666 06:36:18 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:38.666 06:36:18 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:38.924 06:36:18 -- target/tls.sh@121 -- # ktls=false 00:11:38.925 06:36:18 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:38.925 06:36:18 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:38.925 06:36:18 -- target/tls.sh@49 -- # local key hash crc 00:11:38.925 06:36:18 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:38.925 06:36:18 -- target/tls.sh@51 -- # hash=01 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # gzip -1 -c 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # tail -c8 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # head -c 4 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # crc='p$H�' 00:11:38.925 06:36:18 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:38.925 06:36:18 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:38.925 06:36:18 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:38.925 06:36:18 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:38.925 06:36:18 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:38.925 06:36:18 -- target/tls.sh@49 -- # local key hash crc 00:11:38.925 06:36:18 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:38.925 06:36:18 -- target/tls.sh@51 -- # hash=01 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # gzip -1 -c 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # tail -c8 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # head -c 4 00:11:38.925 06:36:18 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:38.925 06:36:18 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:38.925 06:36:18 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:38.925 06:36:18 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:38.925 06:36:18 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:38.925 06:36:18 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:38.925 06:36:18 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.925 06:36:18 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:38.925 06:36:18 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:38.925 06:36:18 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:38.925 06:36:18 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.925 06:36:18 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:39.183 06:36:19 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:39.749 06:36:19 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.749 06:36:19 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.749 06:36:19 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:40.007 [2024-07-12 06:36:19.759681] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.007 06:36:19 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:40.265 06:36:20 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:40.523 [2024-07-12 06:36:20.419957] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:40.523 [2024-07-12 06:36:20.420251] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.781 06:36:20 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:41.039 malloc0 00:11:41.039 06:36:20 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:41.297 06:36:21 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:41.556 06:36:21 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:53.753 Initializing NVMe Controllers 00:11:53.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:53.753 Initialization complete. Launching workers. 
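The PSK strings registered above come from the format_interchange_psk helper traced earlier. A minimal standalone sketch of the same pipeline (the crc32 helper name is introduced here for clarity, not taken from the script): gzip's 8-byte trailer is the CRC-32 followed by ISIZE, both little-endian, so tail -c8 | head -c4 extracts the CRC-32 of the input, and the configured key is the base64 of the ASCII hex string with that CRC appended:

  key=00112233445566778899aabbccddeeff     # the hex text itself is encoded, not its raw bytes
  hash=01                                  # 01 for the 32-character keys here; 02 appears later for the 48-character key
  crc32() { echo -n "$1" | gzip -1 -c | tail -c8 | head -c4; }
  # interchange form: NVMeTLSkey-1:<hash>:base64(key || crc32(key)):
  psk="NVMeTLSkey-1:${hash}:$({ echo -n "$key"; crc32 "$key"; } | base64):"
  echo "$psk"    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

key2.txt holds the reversed key ffeeddccbbaa99887766554433221100 formatted the same way; both files are chmod 0600 before nvmf_subsystem_add_host will accept them. The results of the first TLS perf run follow.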
00:11:53.753 ======================================================== 00:11:53.753 Latency(us) 00:11:53.753 Device Information : IOPS MiB/s Average min max 00:11:53.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9350.50 36.53 6846.26 1290.25 10048.39 00:11:53.753 ======================================================== 00:11:53.753 Total : 9350.50 36.53 6846.26 1290.25 10048.39 00:11:53.753 00:11:53.753 06:36:31 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:53.753 06:36:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:53.753 06:36:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:53.753 06:36:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:53.753 06:36:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:53.753 06:36:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:53.753 06:36:31 -- target/tls.sh@28 -- # bdevperf_pid=76499 00:11:53.753 06:36:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:53.753 06:36:31 -- target/tls.sh@31 -- # waitforlisten 76499 /var/tmp/bdevperf.sock 00:11:53.753 06:36:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:53.753 06:36:31 -- common/autotest_common.sh@819 -- # '[' -z 76499 ']' 00:11:53.753 06:36:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:53.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:53.753 06:36:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:53.753 06:36:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:53.753 06:36:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:53.753 06:36:31 -- common/autotest_common.sh@10 -- # set +x 00:11:53.753 [2024-07-12 06:36:31.634909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:53.753 [2024-07-12 06:36:31.635063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76499 ] 00:11:53.753 [2024-07-12 06:36:31.774187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.753 [2024-07-12 06:36:31.815318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.753 06:36:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:53.753 06:36:32 -- common/autotest_common.sh@852 -- # return 0 00:11:53.753 06:36:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:53.753 [2024-07-12 06:36:32.838337] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:53.753 TLSTESTn1 00:11:53.753 06:36:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:53.753 Running I/O for 10 seconds... 
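While that 10-second verify run proceeds, note that the host side it exercises reduces to three steps: start bdevperf idle (-z) on a private RPC socket, attach a TLS-wrapped NVMe-oF controller, then drive the test over that socket. A condensed sketch using only commands and paths visible in this transcript (the script additionally waits for the RPC socket to appear before issuing the attach):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests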
00:12:03.804 00:12:03.804 Latency(us) 00:12:03.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.804 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:03.804 Verification LBA range: start 0x0 length 0x2000 00:12:03.804 TLSTESTn1 : 10.02 5192.57 20.28 0.00 0.00 24608.42 5570.56 29312.47 00:12:03.804 =================================================================================================================== 00:12:03.804 Total : 5192.57 20.28 0.00 0.00 24608.42 5570.56 29312.47 00:12:03.804 0 00:12:03.804 06:36:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:03.804 06:36:43 -- target/tls.sh@45 -- # killprocess 76499 00:12:03.804 06:36:43 -- common/autotest_common.sh@926 -- # '[' -z 76499 ']' 00:12:03.805 06:36:43 -- common/autotest_common.sh@930 -- # kill -0 76499 00:12:03.805 06:36:43 -- common/autotest_common.sh@931 -- # uname 00:12:03.805 06:36:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:03.805 06:36:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76499 00:12:03.805 killing process with pid 76499 00:12:03.805 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.805 00:12:03.805 Latency(us) 00:12:03.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.805 =================================================================================================================== 00:12:03.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.805 06:36:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:03.805 06:36:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:03.805 06:36:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76499' 00:12:03.805 06:36:43 -- common/autotest_common.sh@945 -- # kill 76499 00:12:03.805 06:36:43 -- common/autotest_common.sh@950 -- # wait 76499 00:12:03.805 06:36:43 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:03.805 06:36:43 -- common/autotest_common.sh@640 -- # local es=0 00:12:03.805 06:36:43 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:03.805 06:36:43 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:03.805 06:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.805 06:36:43 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:03.805 06:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.805 06:36:43 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:03.805 06:36:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:03.805 06:36:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:03.805 06:36:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:03.805 06:36:43 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:03.805 06:36:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:03.805 06:36:43 -- target/tls.sh@28 -- # bdevperf_pid=76630 00:12:03.805 06:36:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:03.805 06:36:43 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:03.805 06:36:43 -- target/tls.sh@31 -- # waitforlisten 76630 /var/tmp/bdevperf.sock 00:12:03.805 06:36:43 -- common/autotest_common.sh@819 -- # '[' -z 76630 ']' 00:12:03.805 06:36:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.805 06:36:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:03.805 06:36:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.805 06:36:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:03.805 06:36:43 -- common/autotest_common.sh@10 -- # set +x 00:12:03.805 [2024-07-12 06:36:43.299453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:03.805 [2024-07-12 06:36:43.300495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76630 ] 00:12:03.805 [2024-07-12 06:36:43.440383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.805 [2024-07-12 06:36:43.475731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.738 06:36:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:04.738 06:36:44 -- common/autotest_common.sh@852 -- # return 0 00:12:04.738 06:36:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:04.738 [2024-07-12 06:36:44.501346] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:04.738 [2024-07-12 06:36:44.513232] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:04.738 [2024-07-12 06:36:44.513324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195d4f0 (107): Transport endpoint is not connected 00:12:04.738 [2024-07-12 06:36:44.514312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195d4f0 (9): Bad file descriptor 00:12:04.738 [2024-07-12 06:36:44.515310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:04.738 [2024-07-12 06:36:44.515336] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:04.738 [2024-07-12 06:36:44.515347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
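This attach deliberately presents key2.txt, which the target never registered for host1, so the TLS handshake cannot complete: the socket reports errno 107 above and the controller ends in a failed state. The JSON-RPC request and error response that follow record the rejection (-32602, Invalid parameters, in this SPDK version). Condensed, the expected-failure call is (the if-wrapper mirrors the script's NOT helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
    echo 'attach unexpectedly succeeded' >&2; exit 1
  fi

The same pattern repeats below for the wrong-hostnqn (host2) and wrong-subnqn (cnode2) attaches.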
00:12:04.738 request: 00:12:04.738 { 00:12:04.738 "name": "TLSTEST", 00:12:04.738 "trtype": "tcp", 00:12:04.738 "traddr": "10.0.0.2", 00:12:04.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:04.738 "adrfam": "ipv4", 00:12:04.738 "trsvcid": "4420", 00:12:04.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.738 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:04.738 "method": "bdev_nvme_attach_controller", 00:12:04.738 "req_id": 1 00:12:04.738 } 00:12:04.738 Got JSON-RPC error response 00:12:04.738 response: 00:12:04.738 { 00:12:04.738 "code": -32602, 00:12:04.738 "message": "Invalid parameters" 00:12:04.739 } 00:12:04.739 06:36:44 -- target/tls.sh@36 -- # killprocess 76630 00:12:04.739 06:36:44 -- common/autotest_common.sh@926 -- # '[' -z 76630 ']' 00:12:04.739 06:36:44 -- common/autotest_common.sh@930 -- # kill -0 76630 00:12:04.739 06:36:44 -- common/autotest_common.sh@931 -- # uname 00:12:04.739 06:36:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:04.739 06:36:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76630 00:12:04.739 killing process with pid 76630 00:12:04.739 Received shutdown signal, test time was about 10.000000 seconds 00:12:04.739 00:12:04.739 Latency(us) 00:12:04.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.739 =================================================================================================================== 00:12:04.739 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:04.739 06:36:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:04.739 06:36:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:04.739 06:36:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76630' 00:12:04.739 06:36:44 -- common/autotest_common.sh@945 -- # kill 76630 00:12:04.739 06:36:44 -- common/autotest_common.sh@950 -- # wait 76630 00:12:04.998 06:36:44 -- target/tls.sh@37 -- # return 1 00:12:04.998 06:36:44 -- common/autotest_common.sh@643 -- # es=1 00:12:04.998 06:36:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:04.998 06:36:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:04.998 06:36:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:04.998 06:36:44 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.999 06:36:44 -- common/autotest_common.sh@640 -- # local es=0 00:12:04.999 06:36:44 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.999 06:36:44 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:04.999 06:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:04.999 06:36:44 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:04.999 06:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:04.999 06:36:44 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.999 06:36:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:04.999 06:36:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:04.999 06:36:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:04.999 06:36:44 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:04.999 06:36:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:04.999 06:36:44 -- target/tls.sh@28 -- # bdevperf_pid=76657 00:12:04.999 06:36:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:04.999 06:36:44 -- target/tls.sh@31 -- # waitforlisten 76657 /var/tmp/bdevperf.sock 00:12:04.999 06:36:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:04.999 06:36:44 -- common/autotest_common.sh@819 -- # '[' -z 76657 ']' 00:12:04.999 06:36:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:04.999 06:36:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:04.999 06:36:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:04.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:04.999 06:36:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:04.999 06:36:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.999 [2024-07-12 06:36:44.779287] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:04.999 [2024-07-12 06:36:44.779681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76657 ] 00:12:05.257 [2024-07-12 06:36:44.924742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.257 [2024-07-12 06:36:44.960191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.193 06:36:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:06.193 06:36:45 -- common/autotest_common.sh@852 -- # return 0 00:12:06.193 06:36:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:06.193 [2024-07-12 06:36:46.085990] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:06.193 [2024-07-12 06:36:46.093405] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:06.193 [2024-07-12 06:36:46.093445] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:06.193 [2024-07-12 06:36:46.093497] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:06.193 [2024-07-12 06:36:46.093847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2f4f0 (107): Transport endpoint is not connected 00:12:06.193 [2024-07-12 06:36:46.094838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2f4f0 (9): Bad file descriptor 00:12:06.193 [2024-07-12 06:36:46.095834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:06.193 [2024-07-12 06:36:46.095854] nvme.c: 
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:06.193 [2024-07-12 06:36:46.095864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:06.193 request: 00:12:06.193 { 00:12:06.193 "name": "TLSTEST", 00:12:06.193 "trtype": "tcp", 00:12:06.193 "traddr": "10.0.0.2", 00:12:06.193 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:06.193 "adrfam": "ipv4", 00:12:06.193 "trsvcid": "4420", 00:12:06.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.193 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:06.193 "method": "bdev_nvme_attach_controller", 00:12:06.193 "req_id": 1 00:12:06.193 } 00:12:06.193 Got JSON-RPC error response 00:12:06.193 response: 00:12:06.193 { 00:12:06.193 "code": -32602, 00:12:06.194 "message": "Invalid parameters" 00:12:06.194 } 00:12:06.452 06:36:46 -- target/tls.sh@36 -- # killprocess 76657 00:12:06.452 06:36:46 -- common/autotest_common.sh@926 -- # '[' -z 76657 ']' 00:12:06.452 06:36:46 -- common/autotest_common.sh@930 -- # kill -0 76657 00:12:06.452 06:36:46 -- common/autotest_common.sh@931 -- # uname 00:12:06.452 06:36:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:06.452 06:36:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76657 00:12:06.452 killing process with pid 76657 00:12:06.452 Received shutdown signal, test time was about 10.000000 seconds 00:12:06.452 00:12:06.452 Latency(us) 00:12:06.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.452 =================================================================================================================== 00:12:06.452 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:06.452 06:36:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:06.452 06:36:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:06.452 06:36:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76657' 00:12:06.452 06:36:46 -- common/autotest_common.sh@945 -- # kill 76657 00:12:06.452 06:36:46 -- common/autotest_common.sh@950 -- # wait 76657 00:12:06.452 06:36:46 -- target/tls.sh@37 -- # return 1 00:12:06.452 06:36:46 -- common/autotest_common.sh@643 -- # es=1 00:12:06.452 06:36:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:06.452 06:36:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:06.452 06:36:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:06.452 06:36:46 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:06.452 06:36:46 -- common/autotest_common.sh@640 -- # local es=0 00:12:06.452 06:36:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:06.452 06:36:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:06.452 06:36:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:06.452 06:36:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:06.452 06:36:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:06.452 06:36:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:06.452 06:36:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:06.452 
06:36:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:06.452 06:36:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:06.452 06:36:46 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:06.452 06:36:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:06.452 06:36:46 -- target/tls.sh@28 -- # bdevperf_pid=76685 00:12:06.452 06:36:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:06.452 06:36:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:06.452 06:36:46 -- target/tls.sh@31 -- # waitforlisten 76685 /var/tmp/bdevperf.sock 00:12:06.452 06:36:46 -- common/autotest_common.sh@819 -- # '[' -z 76685 ']' 00:12:06.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:06.452 06:36:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:06.452 06:36:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:06.452 06:36:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:06.452 06:36:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:06.452 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:12:06.452 [2024-07-12 06:36:46.328715] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:06.452 [2024-07-12 06:36:46.328783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76685 ] 00:12:06.710 [2024-07-12 06:36:46.466379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.710 [2024-07-12 06:36:46.502520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.646 06:36:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:07.646 06:36:47 -- common/autotest_common.sh@852 -- # return 0 00:12:07.646 06:36:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:07.905 [2024-07-12 06:36:47.597107] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:07.905 [2024-07-12 06:36:47.603746] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:07.905 [2024-07-12 06:36:47.603789] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:07.905 [2024-07-12 06:36:47.603844] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:07.905 [2024-07-12 06:36:47.604058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe954f0 (107): Transport endpoint is not connected 00:12:07.905 [2024-07-12 06:36:47.605048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe954f0 (9): Bad file 
descriptor 00:12:07.905 [2024-07-12 06:36:47.606045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:07.905 [2024-07-12 06:36:47.606068] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:07.905 [2024-07-12 06:36:47.606079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:07.905 request: 00:12:07.905 { 00:12:07.905 "name": "TLSTEST", 00:12:07.905 "trtype": "tcp", 00:12:07.905 "traddr": "10.0.0.2", 00:12:07.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:07.905 "adrfam": "ipv4", 00:12:07.905 "trsvcid": "4420", 00:12:07.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:07.905 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:07.906 "method": "bdev_nvme_attach_controller", 00:12:07.906 "req_id": 1 00:12:07.906 } 00:12:07.906 Got JSON-RPC error response 00:12:07.906 response: 00:12:07.906 { 00:12:07.906 "code": -32602, 00:12:07.906 "message": "Invalid parameters" 00:12:07.906 } 00:12:07.906 06:36:47 -- target/tls.sh@36 -- # killprocess 76685 00:12:07.906 06:36:47 -- common/autotest_common.sh@926 -- # '[' -z 76685 ']' 00:12:07.906 06:36:47 -- common/autotest_common.sh@930 -- # kill -0 76685 00:12:07.906 06:36:47 -- common/autotest_common.sh@931 -- # uname 00:12:07.906 06:36:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.906 06:36:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76685 00:12:07.906 killing process with pid 76685 00:12:07.906 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.906 00:12:07.906 Latency(us) 00:12:07.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.906 =================================================================================================================== 00:12:07.906 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:07.906 06:36:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:07.906 06:36:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:07.906 06:36:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76685' 00:12:07.906 06:36:47 -- common/autotest_common.sh@945 -- # kill 76685 00:12:07.906 06:36:47 -- common/autotest_common.sh@950 -- # wait 76685 00:12:07.906 06:36:47 -- target/tls.sh@37 -- # return 1 00:12:07.906 06:36:47 -- common/autotest_common.sh@643 -- # es=1 00:12:07.906 06:36:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:07.906 06:36:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:07.906 06:36:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:07.906 06:36:47 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:07.906 06:36:47 -- common/autotest_common.sh@640 -- # local es=0 00:12:07.906 06:36:47 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:07.906 06:36:47 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:07.906 06:36:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:07.906 06:36:47 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:07.906 06:36:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:07.906 06:36:47 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:07.906 06:36:47 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:07.906 06:36:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:07.906 06:36:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:07.906 06:36:47 -- target/tls.sh@23 -- # psk= 00:12:07.906 06:36:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:07.906 06:36:47 -- target/tls.sh@28 -- # bdevperf_pid=76711 00:12:07.906 06:36:47 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:07.906 06:36:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:07.906 06:36:47 -- target/tls.sh@31 -- # waitforlisten 76711 /var/tmp/bdevperf.sock 00:12:07.906 06:36:47 -- common/autotest_common.sh@819 -- # '[' -z 76711 ']' 00:12:07.906 06:36:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.906 06:36:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.906 06:36:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:07.906 06:36:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.906 06:36:47 -- common/autotest_common.sh@10 -- # set +x 00:12:08.173 [2024-07-12 06:36:47.869849] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:08.173 [2024-07-12 06:36:47.869978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76711 ] 00:12:08.174 [2024-07-12 06:36:48.012414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.174 [2024-07-12 06:36:48.050877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.162 06:36:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:09.162 06:36:48 -- common/autotest_common.sh@852 -- # return 0 00:12:09.162 06:36:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:09.422 [2024-07-12 06:36:49.154130] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:09.422 [2024-07-12 06:36:49.156074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7233a0 (9): Bad file descriptor 00:12:09.422 [2024-07-12 06:36:49.157066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:09.422 [2024-07-12 06:36:49.157247] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:09.422 [2024-07-12 06:36:49.157354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
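Here no --psk is passed at all. Because the listener was created with -k, the connection never becomes usable (spdk_sock_recv fails with errno 107 above) and the host gives up; the request/response dump that follows shows the same -32602 error, this time for a request carrying no psk field. Sketch of the call pattern, with the trailing guard again standing in for the script's NOT helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 && { echo 'unexpected success' >&2; exit 1; }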
00:12:09.422 request: 00:12:09.422 { 00:12:09.422 "name": "TLSTEST", 00:12:09.422 "trtype": "tcp", 00:12:09.422 "traddr": "10.0.0.2", 00:12:09.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.422 "adrfam": "ipv4", 00:12:09.422 "trsvcid": "4420", 00:12:09.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.422 "method": "bdev_nvme_attach_controller", 00:12:09.422 "req_id": 1 00:12:09.422 } 00:12:09.422 Got JSON-RPC error response 00:12:09.422 response: 00:12:09.422 { 00:12:09.422 "code": -32602, 00:12:09.422 "message": "Invalid parameters" 00:12:09.422 } 00:12:09.422 06:36:49 -- target/tls.sh@36 -- # killprocess 76711 00:12:09.422 06:36:49 -- common/autotest_common.sh@926 -- # '[' -z 76711 ']' 00:12:09.422 06:36:49 -- common/autotest_common.sh@930 -- # kill -0 76711 00:12:09.422 06:36:49 -- common/autotest_common.sh@931 -- # uname 00:12:09.422 06:36:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.422 06:36:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76711 00:12:09.422 killing process with pid 76711 00:12:09.422 Received shutdown signal, test time was about 10.000000 seconds 00:12:09.422 00:12:09.422 Latency(us) 00:12:09.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.422 =================================================================================================================== 00:12:09.422 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:09.422 06:36:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:09.422 06:36:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:09.422 06:36:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76711' 00:12:09.422 06:36:49 -- common/autotest_common.sh@945 -- # kill 76711 00:12:09.422 06:36:49 -- common/autotest_common.sh@950 -- # wait 76711 00:12:09.682 06:36:49 -- target/tls.sh@37 -- # return 1 00:12:09.682 06:36:49 -- common/autotest_common.sh@643 -- # es=1 00:12:09.682 06:36:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:09.682 06:36:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:09.682 06:36:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:09.682 06:36:49 -- target/tls.sh@167 -- # killprocess 76243 00:12:09.682 06:36:49 -- common/autotest_common.sh@926 -- # '[' -z 76243 ']' 00:12:09.682 06:36:49 -- common/autotest_common.sh@930 -- # kill -0 76243 00:12:09.682 06:36:49 -- common/autotest_common.sh@931 -- # uname 00:12:09.682 06:36:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.682 06:36:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76243 00:12:09.682 killing process with pid 76243 00:12:09.682 06:36:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:09.682 06:36:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:09.682 06:36:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76243' 00:12:09.682 06:36:49 -- common/autotest_common.sh@945 -- # kill 76243 00:12:09.682 06:36:49 -- common/autotest_common.sh@950 -- # wait 76243 00:12:09.682 06:36:49 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:09.682 06:36:49 -- target/tls.sh@49 -- # local key hash crc 00:12:09.682 06:36:49 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:09.682 06:36:49 -- target/tls.sh@51 -- # hash=02 00:12:09.682 06:36:49 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:12:09.682 06:36:49 -- target/tls.sh@52 -- # gzip -1 -c 00:12:09.682 06:36:49 -- target/tls.sh@52 -- # head -c 4 00:12:09.682 06:36:49 -- target/tls.sh@52 -- # tail -c8 00:12:09.682 06:36:49 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:09.682 06:36:49 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:09.682 06:36:49 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:09.682 06:36:49 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:09.682 06:36:49 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:09.682 06:36:49 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:09.682 06:36:49 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:09.682 06:36:49 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:09.682 06:36:49 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:09.682 06:36:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:09.682 06:36:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:09.682 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:12:09.682 06:36:49 -- nvmf/common.sh@469 -- # nvmfpid=76755 00:12:09.682 06:36:49 -- nvmf/common.sh@470 -- # waitforlisten 76755 00:12:09.682 06:36:49 -- common/autotest_common.sh@819 -- # '[' -z 76755 ']' 00:12:09.682 06:36:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:09.682 06:36:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.682 06:36:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:09.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.682 06:36:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.682 06:36:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:09.682 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:12:09.941 [2024-07-12 06:36:49.608681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:09.941 [2024-07-12 06:36:49.608782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.941 [2024-07-12 06:36:49.743019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.941 [2024-07-12 06:36:49.779532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:09.941 [2024-07-12 06:36:49.779677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.941 [2024-07-12 06:36:49.779690] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.941 [2024-07-12 06:36:49.779698] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
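The long key formatted just above goes through the same CRC-and-base64 pipeline sketched earlier, only with hash identifier 02 and a 48-hex-character key; the trailing == padding appears because 48 key characters plus 4 CRC bytes make 52 bytes, which is not a multiple of 3:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc32() { echo -n "$1" | gzip -1 -c | tail -c8 | head -c4; }
  echo "NVMeTLSkey-1:02:$({ echo -n "$key"; crc32 "$key"; } | base64):"
  # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

A fresh nvmf target (pid 76755) is starting here so the long key can be taken through the same positive and negative attach cases.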
00:12:09.941 [2024-07-12 06:36:49.779722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.877 06:36:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:10.877 06:36:50 -- common/autotest_common.sh@852 -- # return 0 00:12:10.877 06:36:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:10.877 06:36:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:10.877 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:12:10.877 06:36:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.877 06:36:50 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:10.877 06:36:50 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:10.877 06:36:50 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:11.136 [2024-07-12 06:36:50.890119] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.136 06:36:50 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:11.393 06:36:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:11.652 [2024-07-12 06:36:51.362278] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:11.652 [2024-07-12 06:36:51.362552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.652 06:36:51 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:11.910 malloc0 00:12:11.910 06:36:51 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:11.910 06:36:51 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:12.169 06:36:51 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:12.169 06:36:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:12.169 06:36:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:12.169 06:36:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:12.169 06:36:51 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:12.169 06:36:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:12.169 06:36:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:12.169 06:36:51 -- target/tls.sh@28 -- # bdevperf_pid=76810 00:12:12.169 06:36:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:12.169 06:36:52 -- target/tls.sh@31 -- # waitforlisten 76810 /var/tmp/bdevperf.sock 00:12:12.169 06:36:52 -- common/autotest_common.sh@819 -- # '[' -z 76810 ']' 00:12:12.169 06:36:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:12.169 06:36:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:12.169 06:36:52 -- common/autotest_common.sh@826 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:12.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:12.169 06:36:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:12.169 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:12:12.169 [2024-07-12 06:36:52.034704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:12.169 [2024-07-12 06:36:52.034999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76810 ] 00:12:12.428 [2024-07-12 06:36:52.173337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.428 [2024-07-12 06:36:52.212923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.361 06:36:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:13.361 06:36:52 -- common/autotest_common.sh@852 -- # return 0 00:12:13.361 06:36:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:13.361 [2024-07-12 06:36:53.162738] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:13.361 TLSTESTn1 00:12:13.361 06:36:53 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:13.619 Running I/O for 10 seconds... 00:12:23.604 00:12:23.604 Latency(us) 00:12:23.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.604 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:23.605 Verification LBA range: start 0x0 length 0x2000 00:12:23.605 TLSTESTn1 : 10.01 5955.22 23.26 0.00 0.00 21459.58 4796.04 30146.56 00:12:23.605 =================================================================================================================== 00:12:23.605 Total : 5955.22 23.26 0.00 0.00 21459.58 4796.04 30146.56 00:12:23.605 0 00:12:23.605 06:37:03 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:23.605 06:37:03 -- target/tls.sh@45 -- # killprocess 76810 00:12:23.605 06:37:03 -- common/autotest_common.sh@926 -- # '[' -z 76810 ']' 00:12:23.605 06:37:03 -- common/autotest_common.sh@930 -- # kill -0 76810 00:12:23.605 06:37:03 -- common/autotest_common.sh@931 -- # uname 00:12:23.605 06:37:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:23.605 06:37:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76810 00:12:23.605 06:37:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:23.605 06:37:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:23.605 06:37:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76810' 00:12:23.605 killing process with pid 76810 00:12:23.605 06:37:03 -- common/autotest_common.sh@945 -- # kill 76810 00:12:23.605 Received shutdown signal, test time was about 10.000000 seconds 00:12:23.605 00:12:23.605 Latency(us) 00:12:23.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.605 
=================================================================================================================== 00:12:23.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:23.605 06:37:03 -- common/autotest_common.sh@950 -- # wait 76810 00:12:23.864 06:37:03 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.864 06:37:03 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.864 06:37:03 -- common/autotest_common.sh@640 -- # local es=0 00:12:23.864 06:37:03 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.864 06:37:03 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:23.864 06:37:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:23.864 06:37:03 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:23.864 06:37:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:23.864 06:37:03 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.864 06:37:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:23.864 06:37:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:23.864 06:37:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:23.864 06:37:03 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:23.864 06:37:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:23.864 06:37:03 -- target/tls.sh@28 -- # bdevperf_pid=76944 00:12:23.864 06:37:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:23.864 06:37:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:23.864 06:37:03 -- target/tls.sh@31 -- # waitforlisten 76944 /var/tmp/bdevperf.sock 00:12:23.864 06:37:03 -- common/autotest_common.sh@819 -- # '[' -z 76944 ']' 00:12:23.864 06:37:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:23.864 06:37:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:23.864 06:37:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:23.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:23.864 06:37:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:23.864 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:12:23.864 [2024-07-12 06:37:03.616677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
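key_long.txt was made world-readable (chmod 0666) just above, so the bdevperf instance now starting is expected to fail its attach before any connection is attempted: the host-side RPC layer checks the file mode when loading the PSK and answers -22, Could not retrieve PSK from file, as the next dump shows. The check being provoked, condensed:

  key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  chmod 0666 "$key"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$key" && { echo 'unexpected success' >&2; exit 1; }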
00:12:23.864 [2024-07-12 06:37:03.617020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76944 ] 00:12:23.864 [2024-07-12 06:37:03.754078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.122 [2024-07-12 06:37:03.786790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.689 06:37:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:24.689 06:37:04 -- common/autotest_common.sh@852 -- # return 0 00:12:24.689 06:37:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:24.947 [2024-07-12 06:37:04.742146] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:24.947 [2024-07-12 06:37:04.742198] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:24.947 request: 00:12:24.947 { 00:12:24.947 "name": "TLSTEST", 00:12:24.947 "trtype": "tcp", 00:12:24.947 "traddr": "10.0.0.2", 00:12:24.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.947 "adrfam": "ipv4", 00:12:24.947 "trsvcid": "4420", 00:12:24.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.947 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:24.947 "method": "bdev_nvme_attach_controller", 00:12:24.947 "req_id": 1 00:12:24.947 } 00:12:24.947 Got JSON-RPC error response 00:12:24.947 response: 00:12:24.947 { 00:12:24.947 "code": -22, 00:12:24.947 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:24.947 } 00:12:24.947 06:37:04 -- target/tls.sh@36 -- # killprocess 76944 00:12:24.947 06:37:04 -- common/autotest_common.sh@926 -- # '[' -z 76944 ']' 00:12:24.947 06:37:04 -- common/autotest_common.sh@930 -- # kill -0 76944 00:12:24.947 06:37:04 -- common/autotest_common.sh@931 -- # uname 00:12:24.947 06:37:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:24.947 06:37:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76944 00:12:24.947 killing process with pid 76944 00:12:24.947 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.947 00:12:24.947 Latency(us) 00:12:24.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.947 =================================================================================================================== 00:12:24.947 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:24.947 06:37:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:24.947 06:37:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:24.947 06:37:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76944' 00:12:24.947 06:37:04 -- common/autotest_common.sh@945 -- # kill 76944 00:12:24.947 06:37:04 -- common/autotest_common.sh@950 -- # wait 76944 00:12:25.206 06:37:04 -- target/tls.sh@37 -- # return 1 00:12:25.206 06:37:04 -- common/autotest_common.sh@643 -- # es=1 00:12:25.206 06:37:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:25.206 06:37:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:25.206 06:37:04 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:25.206 06:37:04 -- target/tls.sh@183 -- # killprocess 76755 00:12:25.206 06:37:04 -- common/autotest_common.sh@926 -- # '[' -z 76755 ']' 00:12:25.206 06:37:04 -- common/autotest_common.sh@930 -- # kill -0 76755 00:12:25.206 06:37:04 -- common/autotest_common.sh@931 -- # uname 00:12:25.206 06:37:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:25.206 06:37:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76755 00:12:25.206 killing process with pid 76755 00:12:25.206 06:37:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:25.206 06:37:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:25.206 06:37:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76755' 00:12:25.206 06:37:04 -- common/autotest_common.sh@945 -- # kill 76755 00:12:25.206 06:37:04 -- common/autotest_common.sh@950 -- # wait 76755 00:12:25.206 06:37:05 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:25.206 06:37:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:25.206 06:37:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:25.206 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 06:37:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:25.206 06:37:05 -- nvmf/common.sh@469 -- # nvmfpid=76971 00:12:25.206 06:37:05 -- nvmf/common.sh@470 -- # waitforlisten 76971 00:12:25.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.206 06:37:05 -- common/autotest_common.sh@819 -- # '[' -z 76971 ']' 00:12:25.206 06:37:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.206 06:37:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:25.206 06:37:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.206 06:37:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:25.206 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:12:25.465 [2024-07-12 06:37:05.128524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:25.465 [2024-07-12 06:37:05.128808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.465 [2024-07-12 06:37:05.261445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.465 [2024-07-12 06:37:05.292515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.465 [2024-07-12 06:37:05.292885] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.465 [2024-07-12 06:37:05.292936] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.465 [2024-07-12 06:37:05.293066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
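With the host-side rejection confirmed, the previous target is killed and a fresh one (pid 76971) is coming up so the still world-readable key_long.txt can be presented on the target side, where nvmf_subsystem_add_host performs the equivalent permission check; the transcript below ends in -32603, Internal error, after which the script restores mode 0600 and restarts the target once more. Sketch of the target-side check, guard as before:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt \
      && { echo 'add_host unexpectedly succeeded' >&2; exit 1; }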
00:12:25.465 [2024-07-12 06:37:05.293188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.401 06:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:26.401 06:37:06 -- common/autotest_common.sh@852 -- # return 0 00:12:26.401 06:37:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.401 06:37:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:26.401 06:37:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.401 06:37:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.401 06:37:06 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:26.401 06:37:06 -- common/autotest_common.sh@640 -- # local es=0 00:12:26.401 06:37:06 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:26.401 06:37:06 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:12:26.401 06:37:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:26.401 06:37:06 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:12:26.401 06:37:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:26.401 06:37:06 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:26.401 06:37:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:26.401 06:37:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:26.401 [2024-07-12 06:37:06.299886] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.401 06:37:06 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:26.660 06:37:06 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:26.918 [2024-07-12 06:37:06.768034] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:26.918 [2024-07-12 06:37:06.768282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.918 06:37:06 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:27.177 malloc0 00:12:27.177 06:37:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:27.435 06:37:07 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:27.694 [2024-07-12 06:37:07.422664] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:27.694 [2024-07-12 06:37:07.422718] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:27.694 [2024-07-12 06:37:07.422737] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:27.694 request: 00:12:27.694 { 00:12:27.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.694 "host": "nqn.2016-06.io.spdk:host1", 00:12:27.694 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:27.694 "method": "nvmf_subsystem_add_host", 00:12:27.694 
"req_id": 1 00:12:27.694 } 00:12:27.694 Got JSON-RPC error response 00:12:27.694 response: 00:12:27.694 { 00:12:27.694 "code": -32603, 00:12:27.694 "message": "Internal error" 00:12:27.694 } 00:12:27.694 06:37:07 -- common/autotest_common.sh@643 -- # es=1 00:12:27.694 06:37:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:27.694 06:37:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:27.694 06:37:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:27.694 06:37:07 -- target/tls.sh@189 -- # killprocess 76971 00:12:27.694 06:37:07 -- common/autotest_common.sh@926 -- # '[' -z 76971 ']' 00:12:27.694 06:37:07 -- common/autotest_common.sh@930 -- # kill -0 76971 00:12:27.694 06:37:07 -- common/autotest_common.sh@931 -- # uname 00:12:27.694 06:37:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:27.694 06:37:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76971 00:12:27.694 killing process with pid 76971 00:12:27.694 06:37:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:27.694 06:37:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:27.694 06:37:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76971' 00:12:27.694 06:37:07 -- common/autotest_common.sh@945 -- # kill 76971 00:12:27.694 06:37:07 -- common/autotest_common.sh@950 -- # wait 76971 00:12:27.694 06:37:07 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:27.694 06:37:07 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:27.694 06:37:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:27.694 06:37:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:27.694 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:12:27.952 06:37:07 -- nvmf/common.sh@469 -- # nvmfpid=77034 00:12:27.952 06:37:07 -- nvmf/common.sh@470 -- # waitforlisten 77034 00:12:27.952 06:37:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:27.952 06:37:07 -- common/autotest_common.sh@819 -- # '[' -z 77034 ']' 00:12:27.952 06:37:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.952 06:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:27.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.952 06:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.952 06:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:27.952 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:12:27.952 [2024-07-12 06:37:07.668447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:27.952 [2024-07-12 06:37:07.668738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.952 [2024-07-12 06:37:07.811406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.952 [2024-07-12 06:37:07.843223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:27.952 [2024-07-12 06:37:07.843386] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:27.952 [2024-07-12 06:37:07.843401] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.952 [2024-07-12 06:37:07.843409] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.952 [2024-07-12 06:37:07.843432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.884 06:37:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:28.884 06:37:08 -- common/autotest_common.sh@852 -- # return 0 00:12:28.884 06:37:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:28.884 06:37:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:28.884 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:12:28.884 06:37:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.884 06:37:08 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:28.884 06:37:08 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:28.884 06:37:08 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:29.141 [2024-07-12 06:37:08.836863] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.141 06:37:08 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:29.406 06:37:09 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:29.665 [2024-07-12 06:37:09.336965] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:29.665 [2024-07-12 06:37:09.337205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.665 06:37:09 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:29.665 malloc0 00:12:29.665 06:37:09 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:29.922 06:37:09 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:30.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:30.180 06:37:10 -- target/tls.sh@197 -- # bdevperf_pid=77088 00:12:30.180 06:37:10 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:30.180 06:37:10 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:30.180 06:37:10 -- target/tls.sh@200 -- # waitforlisten 77088 /var/tmp/bdevperf.sock 00:12:30.180 06:37:10 -- common/autotest_common.sh@819 -- # '[' -z 77088 ']' 00:12:30.180 06:37:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.180 06:37:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:30.180 06:37:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
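Not part of the captured trace: condensed, the successful target-side sequence traced above comes down to six RPCs. Here rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py (which defaults to /var/tmp/spdk.sock), and -k on the listener is what requests TLS:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt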
00:12:30.180 06:37:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:30.180 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:12:30.180 [2024-07-12 06:37:10.062026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:30.180 [2024-07-12 06:37:10.062306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77088 ] 00:12:30.438 [2024-07-12 06:37:10.193479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.438 [2024-07-12 06:37:10.228594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.374 06:37:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:31.374 06:37:10 -- common/autotest_common.sh@852 -- # return 0 00:12:31.374 06:37:10 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:31.374 [2024-07-12 06:37:11.185231] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:31.374 TLSTESTn1 00:12:31.374 06:37:11 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:31.940 06:37:11 -- target/tls.sh@205 -- # tgtconf='{ 00:12:31.940 "subsystems": [ 00:12:31.940 { 00:12:31.940 "subsystem": "iobuf", 00:12:31.940 "config": [ 00:12:31.940 { 00:12:31.940 "method": "iobuf_set_options", 00:12:31.940 "params": { 00:12:31.940 "small_pool_count": 8192, 00:12:31.940 "large_pool_count": 1024, 00:12:31.940 "small_bufsize": 8192, 00:12:31.940 "large_bufsize": 135168 00:12:31.940 } 00:12:31.940 } 00:12:31.940 ] 00:12:31.940 }, 00:12:31.940 { 00:12:31.940 "subsystem": "sock", 00:12:31.940 "config": [ 00:12:31.940 { 00:12:31.940 "method": "sock_impl_set_options", 00:12:31.940 "params": { 00:12:31.940 "impl_name": "uring", 00:12:31.941 "recv_buf_size": 2097152, 00:12:31.941 "send_buf_size": 2097152, 00:12:31.941 "enable_recv_pipe": true, 00:12:31.941 "enable_quickack": false, 00:12:31.941 "enable_placement_id": 0, 00:12:31.941 "enable_zerocopy_send_server": false, 00:12:31.941 "enable_zerocopy_send_client": false, 00:12:31.941 "zerocopy_threshold": 0, 00:12:31.941 "tls_version": 0, 00:12:31.941 "enable_ktls": false 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "sock_impl_set_options", 00:12:31.941 "params": { 00:12:31.941 "impl_name": "posix", 00:12:31.941 "recv_buf_size": 2097152, 00:12:31.941 "send_buf_size": 2097152, 00:12:31.941 "enable_recv_pipe": true, 00:12:31.941 "enable_quickack": false, 00:12:31.941 "enable_placement_id": 0, 00:12:31.941 "enable_zerocopy_send_server": true, 00:12:31.941 "enable_zerocopy_send_client": false, 00:12:31.941 "zerocopy_threshold": 0, 00:12:31.941 "tls_version": 0, 00:12:31.941 "enable_ktls": false 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "sock_impl_set_options", 00:12:31.941 "params": { 00:12:31.941 "impl_name": "ssl", 00:12:31.941 "recv_buf_size": 4096, 00:12:31.941 "send_buf_size": 4096, 00:12:31.941 "enable_recv_pipe": true, 00:12:31.941 "enable_quickack": false, 00:12:31.941 "enable_placement_id": 0, 00:12:31.941 "enable_zerocopy_send_server": true, 00:12:31.941 "enable_zerocopy_send_client": false, 00:12:31.941 
"zerocopy_threshold": 0, 00:12:31.941 "tls_version": 0, 00:12:31.941 "enable_ktls": false 00:12:31.941 } 00:12:31.941 } 00:12:31.941 ] 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "subsystem": "vmd", 00:12:31.941 "config": [] 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "subsystem": "accel", 00:12:31.941 "config": [ 00:12:31.941 { 00:12:31.941 "method": "accel_set_options", 00:12:31.941 "params": { 00:12:31.941 "small_cache_size": 128, 00:12:31.941 "large_cache_size": 16, 00:12:31.941 "task_count": 2048, 00:12:31.941 "sequence_count": 2048, 00:12:31.941 "buf_count": 2048 00:12:31.941 } 00:12:31.941 } 00:12:31.941 ] 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "subsystem": "bdev", 00:12:31.941 "config": [ 00:12:31.941 { 00:12:31.941 "method": "bdev_set_options", 00:12:31.941 "params": { 00:12:31.941 "bdev_io_pool_size": 65535, 00:12:31.941 "bdev_io_cache_size": 256, 00:12:31.941 "bdev_auto_examine": true, 00:12:31.941 "iobuf_small_cache_size": 128, 00:12:31.941 "iobuf_large_cache_size": 16 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "bdev_raid_set_options", 00:12:31.941 "params": { 00:12:31.941 "process_window_size_kb": 1024 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "bdev_iscsi_set_options", 00:12:31.941 "params": { 00:12:31.941 "timeout_sec": 30 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "bdev_nvme_set_options", 00:12:31.941 "params": { 00:12:31.941 "action_on_timeout": "none", 00:12:31.941 "timeout_us": 0, 00:12:31.941 "timeout_admin_us": 0, 00:12:31.941 "keep_alive_timeout_ms": 10000, 00:12:31.941 "transport_retry_count": 4, 00:12:31.941 "arbitration_burst": 0, 00:12:31.941 "low_priority_weight": 0, 00:12:31.941 "medium_priority_weight": 0, 00:12:31.941 "high_priority_weight": 0, 00:12:31.941 "nvme_adminq_poll_period_us": 10000, 00:12:31.941 "nvme_ioq_poll_period_us": 0, 00:12:31.941 "io_queue_requests": 0, 00:12:31.941 "delay_cmd_submit": true, 00:12:31.941 "bdev_retry_count": 3, 00:12:31.941 "transport_ack_timeout": 0, 00:12:31.941 "ctrlr_loss_timeout_sec": 0, 00:12:31.941 "reconnect_delay_sec": 0, 00:12:31.941 "fast_io_fail_timeout_sec": 0, 00:12:31.941 "generate_uuids": false, 00:12:31.941 "transport_tos": 0, 00:12:31.941 "io_path_stat": false, 00:12:31.941 "allow_accel_sequence": false 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "bdev_nvme_set_hotplug", 00:12:31.941 "params": { 00:12:31.941 "period_us": 100000, 00:12:31.941 "enable": false 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "bdev_malloc_create", 00:12:31.941 "params": { 00:12:31.941 "name": "malloc0", 00:12:31.941 "num_blocks": 8192, 00:12:31.941 "block_size": 4096, 00:12:31.941 "physical_block_size": 4096, 00:12:31.941 "uuid": "7204f361-b0ff-44cc-ae74-a2208039f01c", 00:12:31.941 "optimal_io_boundary": 0 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "bdev_wait_for_examine" 00:12:31.941 } 00:12:31.941 ] 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "subsystem": "nbd", 00:12:31.941 "config": [] 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "subsystem": "scheduler", 00:12:31.941 "config": [ 00:12:31.941 { 00:12:31.941 "method": "framework_set_scheduler", 00:12:31.941 "params": { 00:12:31.941 "name": "static" 00:12:31.941 } 00:12:31.941 } 00:12:31.941 ] 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "subsystem": "nvmf", 00:12:31.941 "config": [ 00:12:31.941 { 00:12:31.941 "method": "nvmf_set_config", 00:12:31.941 "params": { 00:12:31.941 "discovery_filter": "match_any", 00:12:31.941 
"admin_cmd_passthru": { 00:12:31.941 "identify_ctrlr": false 00:12:31.941 } 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_set_max_subsystems", 00:12:31.941 "params": { 00:12:31.941 "max_subsystems": 1024 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_set_crdt", 00:12:31.941 "params": { 00:12:31.941 "crdt1": 0, 00:12:31.941 "crdt2": 0, 00:12:31.941 "crdt3": 0 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_create_transport", 00:12:31.941 "params": { 00:12:31.941 "trtype": "TCP", 00:12:31.941 "max_queue_depth": 128, 00:12:31.941 "max_io_qpairs_per_ctrlr": 127, 00:12:31.941 "in_capsule_data_size": 4096, 00:12:31.941 "max_io_size": 131072, 00:12:31.941 "io_unit_size": 131072, 00:12:31.941 "max_aq_depth": 128, 00:12:31.941 "num_shared_buffers": 511, 00:12:31.941 "buf_cache_size": 4294967295, 00:12:31.941 "dif_insert_or_strip": false, 00:12:31.941 "zcopy": false, 00:12:31.941 "c2h_success": false, 00:12:31.941 "sock_priority": 0, 00:12:31.941 "abort_timeout_sec": 1 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_create_subsystem", 00:12:31.941 "params": { 00:12:31.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.941 "allow_any_host": false, 00:12:31.941 "serial_number": "SPDK00000000000001", 00:12:31.941 "model_number": "SPDK bdev Controller", 00:12:31.941 "max_namespaces": 10, 00:12:31.941 "min_cntlid": 1, 00:12:31.941 "max_cntlid": 65519, 00:12:31.941 "ana_reporting": false 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_subsystem_add_host", 00:12:31.941 "params": { 00:12:31.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.941 "host": "nqn.2016-06.io.spdk:host1", 00:12:31.941 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_subsystem_add_ns", 00:12:31.941 "params": { 00:12:31.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.941 "namespace": { 00:12:31.941 "nsid": 1, 00:12:31.941 "bdev_name": "malloc0", 00:12:31.941 "nguid": "7204F361B0FF44CCAE74A2208039F01C", 00:12:31.941 "uuid": "7204f361-b0ff-44cc-ae74-a2208039f01c" 00:12:31.941 } 00:12:31.941 } 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "method": "nvmf_subsystem_add_listener", 00:12:31.941 "params": { 00:12:31.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.941 "listen_address": { 00:12:31.941 "trtype": "TCP", 00:12:31.941 "adrfam": "IPv4", 00:12:31.941 "traddr": "10.0.0.2", 00:12:31.941 "trsvcid": "4420" 00:12:31.942 }, 00:12:31.942 "secure_channel": true 00:12:31.942 } 00:12:31.942 } 00:12:31.942 ] 00:12:31.942 } 00:12:31.942 ] 00:12:31.942 }' 00:12:31.942 06:37:11 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:32.201 06:37:11 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:32.201 "subsystems": [ 00:12:32.201 { 00:12:32.201 "subsystem": "iobuf", 00:12:32.201 "config": [ 00:12:32.201 { 00:12:32.201 "method": "iobuf_set_options", 00:12:32.201 "params": { 00:12:32.201 "small_pool_count": 8192, 00:12:32.201 "large_pool_count": 1024, 00:12:32.201 "small_bufsize": 8192, 00:12:32.201 "large_bufsize": 135168 00:12:32.201 } 00:12:32.201 } 00:12:32.201 ] 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "subsystem": "sock", 00:12:32.201 "config": [ 00:12:32.201 { 00:12:32.201 "method": "sock_impl_set_options", 00:12:32.201 "params": { 00:12:32.201 "impl_name": "uring", 00:12:32.201 "recv_buf_size": 2097152, 00:12:32.201 "send_buf_size": 2097152, 
00:12:32.201 "enable_recv_pipe": true, 00:12:32.201 "enable_quickack": false, 00:12:32.201 "enable_placement_id": 0, 00:12:32.201 "enable_zerocopy_send_server": false, 00:12:32.201 "enable_zerocopy_send_client": false, 00:12:32.201 "zerocopy_threshold": 0, 00:12:32.201 "tls_version": 0, 00:12:32.201 "enable_ktls": false 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "sock_impl_set_options", 00:12:32.201 "params": { 00:12:32.201 "impl_name": "posix", 00:12:32.201 "recv_buf_size": 2097152, 00:12:32.201 "send_buf_size": 2097152, 00:12:32.201 "enable_recv_pipe": true, 00:12:32.201 "enable_quickack": false, 00:12:32.201 "enable_placement_id": 0, 00:12:32.201 "enable_zerocopy_send_server": true, 00:12:32.201 "enable_zerocopy_send_client": false, 00:12:32.201 "zerocopy_threshold": 0, 00:12:32.201 "tls_version": 0, 00:12:32.201 "enable_ktls": false 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "sock_impl_set_options", 00:12:32.201 "params": { 00:12:32.201 "impl_name": "ssl", 00:12:32.201 "recv_buf_size": 4096, 00:12:32.201 "send_buf_size": 4096, 00:12:32.201 "enable_recv_pipe": true, 00:12:32.201 "enable_quickack": false, 00:12:32.201 "enable_placement_id": 0, 00:12:32.201 "enable_zerocopy_send_server": true, 00:12:32.201 "enable_zerocopy_send_client": false, 00:12:32.201 "zerocopy_threshold": 0, 00:12:32.201 "tls_version": 0, 00:12:32.201 "enable_ktls": false 00:12:32.201 } 00:12:32.201 } 00:12:32.201 ] 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "subsystem": "vmd", 00:12:32.201 "config": [] 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "subsystem": "accel", 00:12:32.201 "config": [ 00:12:32.201 { 00:12:32.201 "method": "accel_set_options", 00:12:32.201 "params": { 00:12:32.201 "small_cache_size": 128, 00:12:32.201 "large_cache_size": 16, 00:12:32.201 "task_count": 2048, 00:12:32.201 "sequence_count": 2048, 00:12:32.201 "buf_count": 2048 00:12:32.201 } 00:12:32.201 } 00:12:32.201 ] 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "subsystem": "bdev", 00:12:32.201 "config": [ 00:12:32.201 { 00:12:32.201 "method": "bdev_set_options", 00:12:32.201 "params": { 00:12:32.201 "bdev_io_pool_size": 65535, 00:12:32.201 "bdev_io_cache_size": 256, 00:12:32.201 "bdev_auto_examine": true, 00:12:32.201 "iobuf_small_cache_size": 128, 00:12:32.201 "iobuf_large_cache_size": 16 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "bdev_raid_set_options", 00:12:32.201 "params": { 00:12:32.201 "process_window_size_kb": 1024 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "bdev_iscsi_set_options", 00:12:32.201 "params": { 00:12:32.201 "timeout_sec": 30 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "bdev_nvme_set_options", 00:12:32.201 "params": { 00:12:32.201 "action_on_timeout": "none", 00:12:32.201 "timeout_us": 0, 00:12:32.201 "timeout_admin_us": 0, 00:12:32.201 "keep_alive_timeout_ms": 10000, 00:12:32.201 "transport_retry_count": 4, 00:12:32.201 "arbitration_burst": 0, 00:12:32.201 "low_priority_weight": 0, 00:12:32.201 "medium_priority_weight": 0, 00:12:32.201 "high_priority_weight": 0, 00:12:32.201 "nvme_adminq_poll_period_us": 10000, 00:12:32.201 "nvme_ioq_poll_period_us": 0, 00:12:32.201 "io_queue_requests": 512, 00:12:32.201 "delay_cmd_submit": true, 00:12:32.201 "bdev_retry_count": 3, 00:12:32.201 "transport_ack_timeout": 0, 00:12:32.201 "ctrlr_loss_timeout_sec": 0, 00:12:32.201 "reconnect_delay_sec": 0, 00:12:32.201 "fast_io_fail_timeout_sec": 0, 00:12:32.201 "generate_uuids": false, 00:12:32.201 
"transport_tos": 0, 00:12:32.201 "io_path_stat": false, 00:12:32.201 "allow_accel_sequence": false 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "bdev_nvme_attach_controller", 00:12:32.201 "params": { 00:12:32.201 "name": "TLSTEST", 00:12:32.201 "trtype": "TCP", 00:12:32.201 "adrfam": "IPv4", 00:12:32.201 "traddr": "10.0.0.2", 00:12:32.201 "trsvcid": "4420", 00:12:32.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.201 "prchk_reftag": false, 00:12:32.201 "prchk_guard": false, 00:12:32.201 "ctrlr_loss_timeout_sec": 0, 00:12:32.201 "reconnect_delay_sec": 0, 00:12:32.201 "fast_io_fail_timeout_sec": 0, 00:12:32.201 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:32.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.201 "hdgst": false, 00:12:32.201 "ddgst": false 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "bdev_nvme_set_hotplug", 00:12:32.201 "params": { 00:12:32.201 "period_us": 100000, 00:12:32.201 "enable": false 00:12:32.201 } 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "method": "bdev_wait_for_examine" 00:12:32.201 } 00:12:32.201 ] 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "subsystem": "nbd", 00:12:32.201 "config": [] 00:12:32.201 } 00:12:32.201 ] 00:12:32.201 }' 00:12:32.201 06:37:11 -- target/tls.sh@208 -- # killprocess 77088 00:12:32.201 06:37:11 -- common/autotest_common.sh@926 -- # '[' -z 77088 ']' 00:12:32.201 06:37:11 -- common/autotest_common.sh@930 -- # kill -0 77088 00:12:32.201 06:37:11 -- common/autotest_common.sh@931 -- # uname 00:12:32.201 06:37:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:32.201 06:37:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77088 00:12:32.201 killing process with pid 77088 00:12:32.201 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.201 00:12:32.201 Latency(us) 00:12:32.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.201 =================================================================================================================== 00:12:32.201 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:32.201 06:37:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:32.201 06:37:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:32.201 06:37:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77088' 00:12:32.201 06:37:11 -- common/autotest_common.sh@945 -- # kill 77088 00:12:32.201 06:37:11 -- common/autotest_common.sh@950 -- # wait 77088 00:12:32.201 06:37:12 -- target/tls.sh@209 -- # killprocess 77034 00:12:32.201 06:37:12 -- common/autotest_common.sh@926 -- # '[' -z 77034 ']' 00:12:32.201 06:37:12 -- common/autotest_common.sh@930 -- # kill -0 77034 00:12:32.201 06:37:12 -- common/autotest_common.sh@931 -- # uname 00:12:32.202 06:37:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:32.202 06:37:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77034 00:12:32.202 killing process with pid 77034 00:12:32.202 06:37:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:32.202 06:37:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:32.202 06:37:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77034' 00:12:32.202 06:37:12 -- common/autotest_common.sh@945 -- # kill 77034 00:12:32.202 06:37:12 -- common/autotest_common.sh@950 -- # wait 77034 00:12:32.460 06:37:12 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 
-c /dev/fd/62 00:12:32.460 06:37:12 -- target/tls.sh@212 -- # echo '{ 00:12:32.460 "subsystems": [ 00:12:32.460 { 00:12:32.460 "subsystem": "iobuf", 00:12:32.460 "config": [ 00:12:32.460 { 00:12:32.460 "method": "iobuf_set_options", 00:12:32.460 "params": { 00:12:32.460 "small_pool_count": 8192, 00:12:32.460 "large_pool_count": 1024, 00:12:32.460 "small_bufsize": 8192, 00:12:32.460 "large_bufsize": 135168 00:12:32.460 } 00:12:32.460 } 00:12:32.460 ] 00:12:32.460 }, 00:12:32.460 { 00:12:32.460 "subsystem": "sock", 00:12:32.460 "config": [ 00:12:32.460 { 00:12:32.460 "method": "sock_impl_set_options", 00:12:32.460 "params": { 00:12:32.460 "impl_name": "uring", 00:12:32.460 "recv_buf_size": 2097152, 00:12:32.460 "send_buf_size": 2097152, 00:12:32.460 "enable_recv_pipe": true, 00:12:32.461 "enable_quickack": false, 00:12:32.461 "enable_placement_id": 0, 00:12:32.461 "enable_zerocopy_send_server": false, 00:12:32.461 "enable_zerocopy_send_client": false, 00:12:32.461 "zerocopy_threshold": 0, 00:12:32.461 "tls_version": 0, 00:12:32.461 "enable_ktls": false 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "sock_impl_set_options", 00:12:32.461 "params": { 00:12:32.461 "impl_name": "posix", 00:12:32.461 "recv_buf_size": 2097152, 00:12:32.461 "send_buf_size": 2097152, 00:12:32.461 "enable_recv_pipe": true, 00:12:32.461 "enable_quickack": false, 00:12:32.461 "enable_placement_id": 0, 00:12:32.461 "enable_zerocopy_send_server": true, 00:12:32.461 "enable_zerocopy_send_client": false, 00:12:32.461 "zerocopy_threshold": 0, 00:12:32.461 "tls_version": 0, 00:12:32.461 "enable_ktls": false 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "sock_impl_set_options", 00:12:32.461 "params": { 00:12:32.461 "impl_name": "ssl", 00:12:32.461 "recv_buf_size": 4096, 00:12:32.461 "send_buf_size": 4096, 00:12:32.461 "enable_recv_pipe": true, 00:12:32.461 "enable_quickack": false, 00:12:32.461 "enable_placement_id": 0, 00:12:32.461 "enable_zerocopy_send_server": true, 00:12:32.461 "enable_zerocopy_send_client": false, 00:12:32.461 "zerocopy_threshold": 0, 00:12:32.461 "tls_version": 0, 00:12:32.461 "enable_ktls": false 00:12:32.461 } 00:12:32.461 } 00:12:32.461 ] 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "subsystem": "vmd", 00:12:32.461 "config": [] 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "subsystem": "accel", 00:12:32.461 "config": [ 00:12:32.461 { 00:12:32.461 "method": "accel_set_options", 00:12:32.461 "params": { 00:12:32.461 "small_cache_size": 128, 00:12:32.461 "large_cache_size": 16, 00:12:32.461 "task_count": 2048, 00:12:32.461 "sequence_count": 2048, 00:12:32.461 "buf_count": 2048 00:12:32.461 } 00:12:32.461 } 00:12:32.461 ] 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "subsystem": "bdev", 00:12:32.461 "config": [ 00:12:32.461 { 00:12:32.461 "method": "bdev_set_options", 00:12:32.461 "params": { 00:12:32.461 "bdev_io_pool_size": 65535, 00:12:32.461 "bdev_io_cache_size": 256, 00:12:32.461 "bdev_auto_examine": true, 00:12:32.461 "iobuf_small_cache_size": 128, 00:12:32.461 "iobuf_large_cache_size": 16 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "bdev_raid_set_options", 00:12:32.461 "params": { 00:12:32.461 "process_window_size_kb": 1024 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "bdev_iscsi_set_options", 00:12:32.461 "params": { 00:12:32.461 "timeout_sec": 30 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "bdev_nvme_set_options", 00:12:32.461 "params": { 00:12:32.461 "action_on_timeout": "none", 
00:12:32.461 "timeout_us": 0, 00:12:32.461 "timeout_admin_us": 0, 00:12:32.461 "keep_alive_timeout_ms": 10000, 00:12:32.461 "transport_retry_count": 4, 00:12:32.461 "arbitration_burst": 0, 00:12:32.461 "low_priority_weight": 0, 00:12:32.461 "medium_priority_weight": 0, 00:12:32.461 "high_priority_weight": 0, 00:12:32.461 "nvme_adminq_poll_period_us": 10000, 00:12:32.461 "nvme_ioq_poll_period_us": 0, 00:12:32.461 "io_queue_requests": 0, 00:12:32.461 "delay_cmd_submit": true, 00:12:32.461 "bdev_retry_count": 3, 00:12:32.461 "transport_ack_timeout": 0, 00:12:32.461 "ctrlr_loss_timeout_sec": 0, 00:12:32.461 "reconnect_delay_sec": 0, 00:12:32.461 "fast_io_fail_timeout_sec": 0, 00:12:32.461 "generate_uuids": false, 00:12:32.461 "transport_tos": 0, 00:12:32.461 "io_path_stat": false, 00:12:32.461 "allow_accel_sequence": false 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "bdev_nvme_set_hotplug", 00:12:32.461 "params": { 00:12:32.461 "period_us": 100000, 00:12:32.461 "enable": false 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "bdev_malloc_create", 00:12:32.461 "params": { 00:12:32.461 "name": "malloc0", 00:12:32.461 "num_blocks": 8192, 00:12:32.461 "block_size": 4096, 00:12:32.461 "physical_block_size": 4096, 00:12:32.461 "uuid": "7204f361-b0ff-44cc-ae74-a2208039f01c", 00:12:32.461 "optimal_io_boundary": 0 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "bdev_wait_for_examine" 00:12:32.461 } 00:12:32.461 ] 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "subsystem": "nbd", 00:12:32.461 "config": [] 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "subsystem": "scheduler", 00:12:32.461 "config": [ 00:12:32.461 { 00:12:32.461 "method": "framework_set_scheduler", 00:12:32.461 "params": { 00:12:32.461 "name": "static" 00:12:32.461 } 00:12:32.461 } 00:12:32.461 ] 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "subsystem": "nvmf", 00:12:32.461 "config": [ 00:12:32.461 { 00:12:32.461 "method": "nvmf_set_config", 00:12:32.461 "params": { 00:12:32.461 "discovery_filter": "match_any", 00:12:32.461 "admin_cmd_passthru": { 00:12:32.461 "identify_ctrlr": false 00:12:32.461 } 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_set_max_subsystems", 00:12:32.461 "params": { 00:12:32.461 "max_subsystems": 1024 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_set_crdt", 00:12:32.461 "params": { 00:12:32.461 "crdt1": 0, 00:12:32.461 "crdt2": 0, 00:12:32.461 "crdt3": 0 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_create_transport", 00:12:32.461 "params": { 00:12:32.461 "trtype": "TCP", 00:12:32.461 "max_queue_depth": 128, 00:12:32.461 "max_io_qpairs_per_ctrlr": 127, 00:12:32.461 "in_capsule_data_size": 4096, 00:12:32.461 "max_io_size": 131072, 00:12:32.461 "io_unit_size": 131072, 00:12:32.461 "max_aq_depth": 128, 00:12:32.461 "num_shared_buffers": 511, 00:12:32.461 "buf_cache_size": 4294967295, 00:12:32.461 "dif_insert_or_strip": false, 00:12:32.461 "zcopy": false, 00:12:32.461 "c2h_success": false, 00:12:32.461 "sock_priority": 0, 00:12:32.461 "abort_timeout_sec": 1 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_create_subsystem", 00:12:32.461 "params": { 00:12:32.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.461 "allow_any_host": false, 00:12:32.461 "serial_number": "SPDK00000000000001", 00:12:32.461 "model_number": "SPDK bdev Controller", 00:12:32.461 "max_namespaces": 10, 00:12:32.461 "min_cntlid": 1, 00:12:32.461 "max_cntlid": 65519, 00:12:32.461 
"ana_reporting": false 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_subsystem_add_host", 00:12:32.461 "params": { 00:12:32.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.461 "host": "nqn.2016-06.io.spdk:host1", 00:12:32.461 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_subsystem_add_ns", 00:12:32.461 "params": { 00:12:32.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.461 "namespace": { 00:12:32.461 "nsid": 1, 00:12:32.461 "bdev_name": "malloc0", 00:12:32.461 "nguid": "7204F361B0FF44CCAE74A2208039F01C", 00:12:32.461 "uuid": "7204f361-b0ff-44cc-ae74-a2208039f01c" 00:12:32.461 } 00:12:32.461 } 00:12:32.461 }, 00:12:32.461 { 00:12:32.461 "method": "nvmf_subsystem_add_listener", 00:12:32.461 "params": { 00:12:32.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.461 "listen_address": { 00:12:32.461 "trtype": "TCP", 00:12:32.461 "adrfam": "IPv4", 00:12:32.461 "traddr": "10.0.0.2", 00:12:32.461 "trsvcid": "4420" 00:12:32.461 }, 00:12:32.461 "secure_channel": true 00:12:32.461 } 00:12:32.461 } 00:12:32.461 ] 00:12:32.461 } 00:12:32.461 ] 00:12:32.461 }' 00:12:32.461 06:37:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.461 06:37:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:32.461 06:37:12 -- common/autotest_common.sh@10 -- # set +x 00:12:32.462 06:37:12 -- nvmf/common.sh@469 -- # nvmfpid=77131 00:12:32.462 06:37:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:32.462 06:37:12 -- nvmf/common.sh@470 -- # waitforlisten 77131 00:12:32.462 06:37:12 -- common/autotest_common.sh@819 -- # '[' -z 77131 ']' 00:12:32.462 06:37:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.462 06:37:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:32.462 06:37:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.462 06:37:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:32.462 06:37:12 -- common/autotest_common.sh@10 -- # set +x 00:12:32.462 [2024-07-12 06:37:12.280392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:32.462 [2024-07-12 06:37:12.280496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.721 [2024-07-12 06:37:12.416779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.721 [2024-07-12 06:37:12.449106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.721 [2024-07-12 06:37:12.449507] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.721 [2024-07-12 06:37:12.449529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.721 [2024-07-12 06:37:12.449538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:32.721 [2024-07-12 06:37:12.449564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.721 [2024-07-12 06:37:12.626392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.980 [2024-07-12 06:37:12.658276] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:32.980 [2024-07-12 06:37:12.658546] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.547 06:37:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:33.547 06:37:13 -- common/autotest_common.sh@852 -- # return 0 00:12:33.547 06:37:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:33.547 06:37:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:33.547 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:33.547 06:37:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.547 06:37:13 -- target/tls.sh@216 -- # bdevperf_pid=77162 00:12:33.547 06:37:13 -- target/tls.sh@217 -- # waitforlisten 77162 /var/tmp/bdevperf.sock 00:12:33.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:33.547 06:37:13 -- common/autotest_common.sh@819 -- # '[' -z 77162 ']' 00:12:33.547 06:37:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:33.547 06:37:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:33.547 06:37:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:33.547 06:37:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:33.547 06:37:13 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:33.547 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:33.547 06:37:13 -- target/tls.sh@213 -- # echo '{ 00:12:33.547 "subsystems": [ 00:12:33.547 { 00:12:33.547 "subsystem": "iobuf", 00:12:33.547 "config": [ 00:12:33.547 { 00:12:33.547 "method": "iobuf_set_options", 00:12:33.547 "params": { 00:12:33.547 "small_pool_count": 8192, 00:12:33.547 "large_pool_count": 1024, 00:12:33.547 "small_bufsize": 8192, 00:12:33.547 "large_bufsize": 135168 00:12:33.547 } 00:12:33.547 } 00:12:33.547 ] 00:12:33.547 }, 00:12:33.547 { 00:12:33.547 "subsystem": "sock", 00:12:33.547 "config": [ 00:12:33.547 { 00:12:33.547 "method": "sock_impl_set_options", 00:12:33.547 "params": { 00:12:33.547 "impl_name": "uring", 00:12:33.547 "recv_buf_size": 2097152, 00:12:33.547 "send_buf_size": 2097152, 00:12:33.547 "enable_recv_pipe": true, 00:12:33.547 "enable_quickack": false, 00:12:33.547 "enable_placement_id": 0, 00:12:33.547 "enable_zerocopy_send_server": false, 00:12:33.547 "enable_zerocopy_send_client": false, 00:12:33.547 "zerocopy_threshold": 0, 00:12:33.547 "tls_version": 0, 00:12:33.547 "enable_ktls": false 00:12:33.547 } 00:12:33.547 }, 00:12:33.548 { 00:12:33.548 "method": "sock_impl_set_options", 00:12:33.548 "params": { 00:12:33.548 "impl_name": "posix", 00:12:33.548 "recv_buf_size": 2097152, 00:12:33.548 "send_buf_size": 2097152, 00:12:33.548 "enable_recv_pipe": true, 00:12:33.548 "enable_quickack": false, 00:12:33.548 "enable_placement_id": 0, 00:12:33.548 "enable_zerocopy_send_server": true, 00:12:33.548 "enable_zerocopy_send_client": false, 00:12:33.548 "zerocopy_threshold": 0, 00:12:33.548 "tls_version": 0, 00:12:33.548 
"enable_ktls": false 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "method": "sock_impl_set_options", 00:12:33.548 "params": { 00:12:33.548 "impl_name": "ssl", 00:12:33.548 "recv_buf_size": 4096, 00:12:33.548 "send_buf_size": 4096, 00:12:33.548 "enable_recv_pipe": true, 00:12:33.548 "enable_quickack": false, 00:12:33.548 "enable_placement_id": 0, 00:12:33.548 "enable_zerocopy_send_server": true, 00:12:33.548 "enable_zerocopy_send_client": false, 00:12:33.548 "zerocopy_threshold": 0, 00:12:33.548 "tls_version": 0, 00:12:33.548 "enable_ktls": false 00:12:33.548 } 00:12:33.548 } 00:12:33.548 ] 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "subsystem": "vmd", 00:12:33.548 "config": [] 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "subsystem": "accel", 00:12:33.548 "config": [ 00:12:33.548 { 00:12:33.548 "method": "accel_set_options", 00:12:33.548 "params": { 00:12:33.548 "small_cache_size": 128, 00:12:33.548 "large_cache_size": 16, 00:12:33.548 "task_count": 2048, 00:12:33.548 "sequence_count": 2048, 00:12:33.548 "buf_count": 2048 00:12:33.548 } 00:12:33.548 } 00:12:33.548 ] 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "subsystem": "bdev", 00:12:33.548 "config": [ 00:12:33.548 { 00:12:33.548 "method": "bdev_set_options", 00:12:33.548 "params": { 00:12:33.548 "bdev_io_pool_size": 65535, 00:12:33.548 "bdev_io_cache_size": 256, 00:12:33.548 "bdev_auto_examine": true, 00:12:33.548 "iobuf_small_cache_size": 128, 00:12:33.548 "iobuf_large_cache_size": 16 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "method": "bdev_raid_set_options", 00:12:33.548 "params": { 00:12:33.548 "process_window_size_kb": 1024 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "method": "bdev_iscsi_set_options", 00:12:33.548 "params": { 00:12:33.548 "timeout_sec": 30 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "method": "bdev_nvme_set_options", 00:12:33.548 "params": { 00:12:33.548 "action_on_timeout": "none", 00:12:33.548 "timeout_us": 0, 00:12:33.548 "timeout_admin_us": 0, 00:12:33.548 "keep_alive_timeout_ms": 10000, 00:12:33.548 "transport_retry_count": 4, 00:12:33.548 "arbitration_burst": 0, 00:12:33.548 "low_priority_weight": 0, 00:12:33.548 "medium_priority_weight": 0, 00:12:33.548 "high_priority_weight": 0, 00:12:33.548 "nvme_adminq_poll_period_us": 10000, 00:12:33.548 "nvme_ioq_poll_period_us": 0, 00:12:33.548 "io_queue_requests": 512, 00:12:33.548 "delay_cmd_submit": true, 00:12:33.548 "bdev_retry_count": 3, 00:12:33.548 "transport_ack_timeout": 0, 00:12:33.548 "ctrlr_loss_timeout_sec": 0, 00:12:33.548 "reconnect_delay_sec": 0, 00:12:33.548 "fast_io_fail_timeout_sec": 0, 00:12:33.548 "generate_uuids": false, 00:12:33.548 "transport_tos": 0, 00:12:33.548 "io_path_stat": false, 00:12:33.548 "allow_accel_sequence": false 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "method": "bdev_nvme_attach_controller", 00:12:33.548 "params": { 00:12:33.548 "name": "TLSTEST", 00:12:33.548 "trtype": "TCP", 00:12:33.548 "adrfam": "IPv4", 00:12:33.548 "traddr": "10.0.0.2", 00:12:33.548 "trsvcid": "4420", 00:12:33.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.548 "prchk_reftag": false, 00:12:33.548 "prchk_guard": false, 00:12:33.548 "ctrlr_loss_timeout_sec": 0, 00:12:33.548 "reconnect_delay_sec": 0, 00:12:33.548 "fast_io_fail_timeout_sec": 0, 00:12:33.548 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:33.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.548 "hdgst": false, 00:12:33.548 "ddgst": false 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 
{ 00:12:33.548 "method": "bdev_nvme_set_hotplug", 00:12:33.548 "params": { 00:12:33.548 "period_us": 100000, 00:12:33.548 "enable": false 00:12:33.548 } 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "method": "bdev_wait_for_examine" 00:12:33.548 } 00:12:33.548 ] 00:12:33.548 }, 00:12:33.548 { 00:12:33.548 "subsystem": "nbd", 00:12:33.548 "config": [] 00:12:33.548 } 00:12:33.548 ] 00:12:33.548 }' 00:12:33.548 [2024-07-12 06:37:13.298137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:33.548 [2024-07-12 06:37:13.298911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77162 ] 00:12:33.548 [2024-07-12 06:37:13.441798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.806 [2024-07-12 06:37:13.481769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.806 [2024-07-12 06:37:13.607172] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:34.372 06:37:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:34.372 06:37:14 -- common/autotest_common.sh@852 -- # return 0 00:12:34.372 06:37:14 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:34.630 Running I/O for 10 seconds... 00:12:44.605 00:12:44.605 Latency(us) 00:12:44.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.605 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:44.605 Verification LBA range: start 0x0 length 0x2000 00:12:44.605 TLSTESTn1 : 10.01 5810.46 22.70 0.00 0.00 21994.18 4110.89 25737.77 00:12:44.605 =================================================================================================================== 00:12:44.605 Total : 5810.46 22.70 0.00 0.00 21994.18 4110.89 25737.77 00:12:44.605 0 00:12:44.605 06:37:24 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:44.605 06:37:24 -- target/tls.sh@223 -- # killprocess 77162 00:12:44.605 06:37:24 -- common/autotest_common.sh@926 -- # '[' -z 77162 ']' 00:12:44.605 06:37:24 -- common/autotest_common.sh@930 -- # kill -0 77162 00:12:44.605 06:37:24 -- common/autotest_common.sh@931 -- # uname 00:12:44.605 06:37:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:44.605 06:37:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77162 00:12:44.605 killing process with pid 77162 00:12:44.605 Received shutdown signal, test time was about 10.000000 seconds 00:12:44.605 00:12:44.605 Latency(us) 00:12:44.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.605 =================================================================================================================== 00:12:44.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:44.605 06:37:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:44.605 06:37:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:44.605 06:37:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77162' 00:12:44.605 06:37:24 -- common/autotest_common.sh@945 -- # kill 77162 00:12:44.605 06:37:24 -- common/autotest_common.sh@950 -- # wait 77162 00:12:44.863 06:37:24 -- target/tls.sh@224 -- # killprocess 77131 00:12:44.863 06:37:24 -- 
common/autotest_common.sh@926 -- # '[' -z 77131 ']' 00:12:44.863 06:37:24 -- common/autotest_common.sh@930 -- # kill -0 77131 00:12:44.863 06:37:24 -- common/autotest_common.sh@931 -- # uname 00:12:44.863 06:37:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:44.863 06:37:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77131 00:12:44.863 killing process with pid 77131 00:12:44.863 06:37:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:44.863 06:37:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:44.863 06:37:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77131' 00:12:44.863 06:37:24 -- common/autotest_common.sh@945 -- # kill 77131 00:12:44.863 06:37:24 -- common/autotest_common.sh@950 -- # wait 77131 00:12:44.863 06:37:24 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:44.863 06:37:24 -- target/tls.sh@227 -- # cleanup 00:12:44.863 06:37:24 -- target/tls.sh@15 -- # process_shm --id 0 00:12:44.863 06:37:24 -- common/autotest_common.sh@796 -- # type=--id 00:12:44.863 06:37:24 -- common/autotest_common.sh@797 -- # id=0 00:12:44.863 06:37:24 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:12:44.863 06:37:24 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:44.863 06:37:24 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:12:44.863 06:37:24 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:12:44.863 06:37:24 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:12:44.863 06:37:24 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:44.863 nvmf_trace.0 00:12:45.123 06:37:24 -- common/autotest_common.sh@811 -- # return 0 00:12:45.123 06:37:24 -- target/tls.sh@16 -- # killprocess 77162 00:12:45.123 06:37:24 -- common/autotest_common.sh@926 -- # '[' -z 77162 ']' 00:12:45.123 06:37:24 -- common/autotest_common.sh@930 -- # kill -0 77162 00:12:45.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77162) - No such process 00:12:45.123 Process with pid 77162 is not found 00:12:45.123 06:37:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77162 is not found' 00:12:45.123 06:37:24 -- target/tls.sh@17 -- # nvmftestfini 00:12:45.123 06:37:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:45.123 06:37:24 -- nvmf/common.sh@116 -- # sync 00:12:45.123 06:37:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:45.123 06:37:24 -- nvmf/common.sh@119 -- # set +e 00:12:45.123 06:37:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:45.123 06:37:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:45.123 rmmod nvme_tcp 00:12:45.123 rmmod nvme_fabrics 00:12:45.123 rmmod nvme_keyring 00:12:45.123 06:37:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:45.123 Process with pid 77131 is not found 00:12:45.123 06:37:24 -- nvmf/common.sh@123 -- # set -e 00:12:45.123 06:37:24 -- nvmf/common.sh@124 -- # return 0 00:12:45.123 06:37:24 -- nvmf/common.sh@477 -- # '[' -n 77131 ']' 00:12:45.123 06:37:24 -- nvmf/common.sh@478 -- # killprocess 77131 00:12:45.123 06:37:24 -- common/autotest_common.sh@926 -- # '[' -z 77131 ']' 00:12:45.123 06:37:24 -- common/autotest_common.sh@930 -- # kill -0 77131 00:12:45.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77131) - No such process 00:12:45.123 06:37:24 -- common/autotest_common.sh@953 -- # echo 
'Process with pid 77131 is not found' 00:12:45.123 06:37:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:45.123 06:37:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:45.123 06:37:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:45.123 06:37:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.123 06:37:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:45.123 06:37:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.123 06:37:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.123 06:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.123 06:37:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:45.123 06:37:24 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:45.123 ************************************ 00:12:45.123 END TEST nvmf_tls 00:12:45.123 ************************************ 00:12:45.123 00:12:45.123 real 1m10.710s 00:12:45.123 user 1m50.336s 00:12:45.123 sys 0m23.774s 00:12:45.123 06:37:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.123 06:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.123 06:37:24 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:45.123 06:37:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:45.123 06:37:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.123 06:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.123 ************************************ 00:12:45.123 START TEST nvmf_fips 00:12:45.123 ************************************ 00:12:45.123 06:37:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:45.123 * Looking for test storage... 
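Not part of the captured trace: the scripts/common.sh walk traced below (ge 3.0.9 3.0.0, fed by openssl version | awk '{print $2}') is a field-by-field numeric version compare used to confirm a FIPS-capable OpenSSL 3.x. A simplified sketch of the same check, not the exact SPDK helper, assuming purely numeric version fields:

version_ge() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 0   # first higher field decides
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 1
    done
    return 0   # all fields equal, so >= holds
}
version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo 'OpenSSL >= 3.0.0'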
00:12:45.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:45.383 06:37:25 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.383 06:37:25 -- nvmf/common.sh@7 -- # uname -s 00:12:45.383 06:37:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.383 06:37:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.383 06:37:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.383 06:37:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.383 06:37:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.383 06:37:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.383 06:37:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.383 06:37:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.383 06:37:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.383 06:37:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.383 06:37:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:12:45.383 06:37:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:12:45.383 06:37:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.383 06:37:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.383 06:37:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.383 06:37:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.383 06:37:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.383 06:37:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.383 06:37:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.383 06:37:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.383 06:37:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.383 06:37:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.383 06:37:25 -- paths/export.sh@5 -- 
# export PATH 00:12:45.383 06:37:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.383 06:37:25 -- nvmf/common.sh@46 -- # : 0 00:12:45.383 06:37:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:45.383 06:37:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:45.383 06:37:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:45.383 06:37:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.383 06:37:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.383 06:37:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:45.383 06:37:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:45.383 06:37:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:45.383 06:37:25 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:45.383 06:37:25 -- fips/fips.sh@89 -- # check_openssl_version 00:12:45.383 06:37:25 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:45.383 06:37:25 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:45.383 06:37:25 -- fips/fips.sh@85 -- # openssl version 00:12:45.383 06:37:25 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:12:45.383 06:37:25 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:12:45.383 06:37:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:45.383 06:37:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:45.383 06:37:25 -- scripts/common.sh@335 -- # IFS=.-: 00:12:45.383 06:37:25 -- scripts/common.sh@335 -- # read -ra ver1 00:12:45.383 06:37:25 -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.383 06:37:25 -- scripts/common.sh@336 -- # read -ra ver2 00:12:45.383 06:37:25 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:45.383 06:37:25 -- scripts/common.sh@339 -- # ver1_l=3 00:12:45.383 06:37:25 -- scripts/common.sh@340 -- # ver2_l=3 00:12:45.383 06:37:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:45.383 06:37:25 -- scripts/common.sh@343 -- # case "$op" in 00:12:45.383 06:37:25 -- scripts/common.sh@347 -- # : 1 00:12:45.383 06:37:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:45.383 06:37:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.383 06:37:25 -- scripts/common.sh@364 -- # decimal 3 00:12:45.383 06:37:25 -- scripts/common.sh@352 -- # local d=3 00:12:45.383 06:37:25 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:45.383 06:37:25 -- scripts/common.sh@354 -- # echo 3 00:12:45.384 06:37:25 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:45.384 06:37:25 -- scripts/common.sh@365 -- # decimal 3 00:12:45.384 06:37:25 -- scripts/common.sh@352 -- # local d=3 00:12:45.384 06:37:25 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:45.384 06:37:25 -- scripts/common.sh@354 -- # echo 3 00:12:45.384 06:37:25 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:45.384 06:37:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:45.384 06:37:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:45.384 06:37:25 -- scripts/common.sh@363 -- # (( v++ )) 00:12:45.384 06:37:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.384 06:37:25 -- scripts/common.sh@364 -- # decimal 0 00:12:45.384 06:37:25 -- scripts/common.sh@352 -- # local d=0 00:12:45.384 06:37:25 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:45.384 06:37:25 -- scripts/common.sh@354 -- # echo 0 00:12:45.384 06:37:25 -- scripts/common.sh@364 -- # ver1[v]=0 00:12:45.384 06:37:25 -- scripts/common.sh@365 -- # decimal 0 00:12:45.384 06:37:25 -- scripts/common.sh@352 -- # local d=0 00:12:45.384 06:37:25 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:45.384 06:37:25 -- scripts/common.sh@354 -- # echo 0 00:12:45.384 06:37:25 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:45.384 06:37:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:45.384 06:37:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:45.384 06:37:25 -- scripts/common.sh@363 -- # (( v++ )) 00:12:45.384 06:37:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.384 06:37:25 -- scripts/common.sh@364 -- # decimal 9 00:12:45.384 06:37:25 -- scripts/common.sh@352 -- # local d=9 00:12:45.384 06:37:25 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:12:45.384 06:37:25 -- scripts/common.sh@354 -- # echo 9 00:12:45.384 06:37:25 -- scripts/common.sh@364 -- # ver1[v]=9 00:12:45.384 06:37:25 -- scripts/common.sh@365 -- # decimal 0 00:12:45.384 06:37:25 -- scripts/common.sh@352 -- # local d=0 00:12:45.384 06:37:25 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:45.384 06:37:25 -- scripts/common.sh@354 -- # echo 0 00:12:45.384 06:37:25 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:45.384 06:37:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:45.384 06:37:25 -- scripts/common.sh@366 -- # return 0 00:12:45.384 06:37:25 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:45.384 06:37:25 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:12:45.384 06:37:25 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:45.384 06:37:25 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:45.384 06:37:25 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:45.384 06:37:25 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:45.384 06:37:25 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:45.384 06:37:25 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:12:45.384 06:37:25 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:12:45.384 06:37:25 -- fips/fips.sh@114 -- # build_openssl_config 00:12:45.384 06:37:25 -- fips/fips.sh@37 -- # cat 00:12:45.384 06:37:25 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:12:45.384 06:37:25 -- fips/fips.sh@58 -- # cat - 00:12:45.384 06:37:25 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:45.384 06:37:25 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:12:45.384 06:37:25 -- fips/fips.sh@117 -- # mapfile -t providers 00:12:45.384 06:37:25 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:12:45.384 06:37:25 -- fips/fips.sh@117 -- # openssl list -providers 00:12:45.384 06:37:25 -- fips/fips.sh@117 -- # grep name 00:12:45.384 06:37:25 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:12:45.384 06:37:25 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:12:45.384 06:37:25 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:45.384 06:37:25 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:12:45.384 06:37:25 -- fips/fips.sh@128 -- # : 00:12:45.384 06:37:25 -- common/autotest_common.sh@640 -- # local es=0 00:12:45.384 06:37:25 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:45.384 06:37:25 -- common/autotest_common.sh@628 -- # local arg=openssl 00:12:45.384 06:37:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:45.384 06:37:25 -- common/autotest_common.sh@632 -- # type -t openssl 00:12:45.384 06:37:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:45.384 06:37:25 -- common/autotest_common.sh@634 -- # type -P openssl 00:12:45.384 06:37:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:45.384 06:37:25 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:12:45.384 06:37:25 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:12:45.384 06:37:25 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:12:45.384 Error setting digest 00:12:45.384 00A2D9CAFA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:12:45.384 00A2D9CAFA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:12:45.384 06:37:25 -- common/autotest_common.sh@643 -- # es=1 00:12:45.384 06:37:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:45.384 06:37:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:45.384 06:37:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
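The trace above is the FIPS preflight: the script confirms OpenSSL >= 3.0.0, checks that /usr/lib64/ossl-modules/fips.so exists, points OPENSSL_CONF at a generated spdk_fips.conf, verifies that both a base and a fips provider are listed, and then proves enforcement by expecting `openssl md5` to fail (MD5 is not a FIPS-approved digest, hence the "unsupported" inner_evp_generic_fetch error above). A minimal standalone sketch of that final verification, assuming an OpenSSL 3.x build with the fips provider installed; the config file name is illustrative:

    # Sketch only -- assumes OpenSSL 3.x with the fips provider available.
    export OPENSSL_CONF=spdk_fips.conf        # config that activates the fips provider
    openssl list -providers | grep -i name    # expect both a "base" and a "fips" provider
    if echo test | openssl md5 /dev/stdin 2>/dev/null; then
        echo "MD5 succeeded: FIPS mode is NOT being enforced" >&2
        exit 1
    else
        echo "MD5 rejected as expected: FIPS provider is active"
    fi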
00:12:45.384 06:37:25 -- fips/fips.sh@131 -- # nvmftestinit 00:12:45.384 06:37:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:45.384 06:37:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.384 06:37:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:45.384 06:37:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:45.384 06:37:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:45.384 06:37:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.384 06:37:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.384 06:37:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.384 06:37:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:45.384 06:37:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:45.384 06:37:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:45.384 06:37:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:45.384 06:37:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:45.384 06:37:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:45.384 06:37:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.384 06:37:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.384 06:37:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:45.384 06:37:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:45.384 06:37:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:45.384 06:37:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:45.384 06:37:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:45.384 06:37:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.384 06:37:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:45.384 06:37:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:45.384 06:37:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:45.384 06:37:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:45.384 06:37:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:45.384 06:37:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:45.384 Cannot find device "nvmf_tgt_br" 00:12:45.384 06:37:25 -- nvmf/common.sh@154 -- # true 00:12:45.384 06:37:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.384 Cannot find device "nvmf_tgt_br2" 00:12:45.384 06:37:25 -- nvmf/common.sh@155 -- # true 00:12:45.384 06:37:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:45.384 06:37:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:45.643 Cannot find device "nvmf_tgt_br" 00:12:45.643 06:37:25 -- nvmf/common.sh@157 -- # true 00:12:45.643 06:37:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:45.643 Cannot find device "nvmf_tgt_br2" 00:12:45.643 06:37:25 -- nvmf/common.sh@158 -- # true 00:12:45.643 06:37:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:45.643 06:37:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:45.643 06:37:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.643 06:37:25 -- nvmf/common.sh@161 -- # true 00:12:45.643 06:37:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:12:45.643 06:37:25 -- nvmf/common.sh@162 -- # true 00:12:45.643 06:37:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:45.643 06:37:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:45.643 06:37:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:45.643 06:37:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:45.643 06:37:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:45.643 06:37:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:45.643 06:37:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:45.643 06:37:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:45.643 06:37:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:45.643 06:37:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:45.643 06:37:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:45.643 06:37:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:45.643 06:37:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:45.643 06:37:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:45.643 06:37:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:45.643 06:37:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:45.643 06:37:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:45.643 06:37:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:45.643 06:37:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:45.643 06:37:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:45.643 06:37:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:45.643 06:37:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:45.903 06:37:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:45.903 06:37:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:45.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:12:45.903 00:12:45.903 --- 10.0.0.2 ping statistics --- 00:12:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.903 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:45.903 06:37:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:45.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:45.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:45.903 00:12:45.903 --- 10.0.0.3 ping statistics --- 00:12:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.903 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:45.903 06:37:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:45.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:45.903 00:12:45.903 --- 10.0.0.1 ping statistics --- 00:12:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.903 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:45.903 06:37:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.903 06:37:25 -- nvmf/common.sh@421 -- # return 0 00:12:45.903 06:37:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:45.903 06:37:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.903 06:37:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:45.903 06:37:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:45.903 06:37:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.903 06:37:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:45.903 06:37:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:45.903 06:37:25 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:12:45.903 06:37:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:45.903 06:37:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:45.903 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.903 06:37:25 -- nvmf/common.sh@469 -- # nvmfpid=77516 00:12:45.903 06:37:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.903 06:37:25 -- nvmf/common.sh@470 -- # waitforlisten 77516 00:12:45.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.903 06:37:25 -- common/autotest_common.sh@819 -- # '[' -z 77516 ']' 00:12:45.903 06:37:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.903 06:37:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.903 06:37:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.903 06:37:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.903 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:12:45.903 [2024-07-12 06:37:25.689028] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:45.903 [2024-07-12 06:37:25.689141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.162 [2024-07-12 06:37:25.832628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.162 [2024-07-12 06:37:25.873845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:46.162 [2024-07-12 06:37:25.874385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.162 [2024-07-12 06:37:25.874449] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.162 [2024-07-12 06:37:25.874470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
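For orientation, the nvmf_veth_init sequence traced above builds a self-contained test topology: the initiator interface nvmf_init_if (10.0.0.1/24) stays in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the host-side veth peers are enslaved to the nvmf_br bridge, and TCP port 4420 is opened in iptables. Condensed to its essentials (a sketch of the same commands with one target interface shown and the individual link-up steps omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two halves
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT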
00:12:46.162 [2024-07-12 06:37:25.874519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.099 06:37:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.099 06:37:26 -- common/autotest_common.sh@852 -- # return 0 00:12:47.099 06:37:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:47.099 06:37:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:47.099 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:12:47.099 06:37:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.099 06:37:26 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:12:47.099 06:37:26 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:47.099 06:37:26 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:47.099 06:37:26 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:47.099 06:37:26 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:47.099 06:37:26 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:47.099 06:37:26 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:47.099 06:37:26 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:47.099 [2024-07-12 06:37:26.957909] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.099 [2024-07-12 06:37:26.973859] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:47.099 [2024-07-12 06:37:26.974133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.099 malloc0 00:12:47.358 06:37:27 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:47.358 06:37:27 -- fips/fips.sh@148 -- # bdevperf_pid=77550 00:12:47.358 06:37:27 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:47.358 06:37:27 -- fips/fips.sh@149 -- # waitforlisten 77550 /var/tmp/bdevperf.sock 00:12:47.358 06:37:27 -- common/autotest_common.sh@819 -- # '[' -z 77550 ']' 00:12:47.358 06:37:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:47.358 06:37:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:47.358 06:37:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:47.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:47.358 06:37:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:47.358 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:47.358 [2024-07-12 06:37:27.110937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:47.358 [2024-07-12 06:37:27.111307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77550 ] 00:12:47.358 [2024-07-12 06:37:27.271060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.617 [2024-07-12 06:37:27.316103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.185 06:37:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:48.185 06:37:28 -- common/autotest_common.sh@852 -- # return 0 00:12:48.185 06:37:28 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:48.443 [2024-07-12 06:37:28.327458] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:48.702 TLSTESTn1 00:12:48.702 06:37:28 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:48.702 Running I/O for 10 seconds... 00:12:58.677 00:12:58.677 Latency(us) 00:12:58.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.677 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:58.677 Verification LBA range: start 0x0 length 0x2000 00:12:58.677 TLSTESTn1 : 10.01 5549.59 21.68 0.00 0.00 23028.07 4140.68 21328.99 00:12:58.677 =================================================================================================================== 00:12:58.677 Total : 5549.59 21.68 0.00 0.00 23028.07 4140.68 21328.99 00:12:58.677 0 00:12:58.677 06:37:38 -- fips/fips.sh@1 -- # cleanup 00:12:58.677 06:37:38 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:58.677 06:37:38 -- common/autotest_common.sh@796 -- # type=--id 00:12:58.677 06:37:38 -- common/autotest_common.sh@797 -- # id=0 00:12:58.677 06:37:38 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:12:58.677 06:37:38 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:58.677 06:37:38 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:12:58.677 06:37:38 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:12:58.677 06:37:38 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:12:58.677 06:37:38 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:58.677 nvmf_trace.0 00:12:58.936 06:37:38 -- common/autotest_common.sh@811 -- # return 0 00:12:58.936 06:37:38 -- fips/fips.sh@16 -- # killprocess 77550 00:12:58.936 06:37:38 -- common/autotest_common.sh@926 -- # '[' -z 77550 ']' 00:12:58.936 06:37:38 -- common/autotest_common.sh@930 -- # kill -0 77550 00:12:58.936 06:37:38 -- common/autotest_common.sh@931 -- # uname 00:12:58.936 06:37:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:58.936 06:37:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77550 00:12:58.936 killing process with pid 77550 00:12:58.936 Received shutdown signal, test time was about 10.000000 seconds 00:12:58.936 00:12:58.936 Latency(us) 00:12:58.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.936 
=================================================================================================================== 00:12:58.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.936 06:37:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:58.936 06:37:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:58.936 06:37:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77550' 00:12:58.936 06:37:38 -- common/autotest_common.sh@945 -- # kill 77550 00:12:58.936 06:37:38 -- common/autotest_common.sh@950 -- # wait 77550 00:12:58.936 06:37:38 -- fips/fips.sh@17 -- # nvmftestfini 00:12:58.936 06:37:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.936 06:37:38 -- nvmf/common.sh@116 -- # sync 00:12:59.195 06:37:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:59.195 06:37:38 -- nvmf/common.sh@119 -- # set +e 00:12:59.196 06:37:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:59.196 06:37:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:59.196 rmmod nvme_tcp 00:12:59.196 rmmod nvme_fabrics 00:12:59.196 rmmod nvme_keyring 00:12:59.196 06:37:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:59.196 06:37:38 -- nvmf/common.sh@123 -- # set -e 00:12:59.196 06:37:38 -- nvmf/common.sh@124 -- # return 0 00:12:59.196 06:37:38 -- nvmf/common.sh@477 -- # '[' -n 77516 ']' 00:12:59.196 06:37:38 -- nvmf/common.sh@478 -- # killprocess 77516 00:12:59.196 06:37:38 -- common/autotest_common.sh@926 -- # '[' -z 77516 ']' 00:12:59.196 06:37:38 -- common/autotest_common.sh@930 -- # kill -0 77516 00:12:59.196 06:37:38 -- common/autotest_common.sh@931 -- # uname 00:12:59.196 06:37:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:59.196 06:37:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77516 00:12:59.196 killing process with pid 77516 00:12:59.196 06:37:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:59.196 06:37:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:59.196 06:37:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77516' 00:12:59.196 06:37:38 -- common/autotest_common.sh@945 -- # kill 77516 00:12:59.196 06:37:38 -- common/autotest_common.sh@950 -- # wait 77516 00:12:59.455 06:37:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:59.455 06:37:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:59.455 06:37:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:59.455 06:37:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.455 06:37:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:59.455 06:37:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.455 06:37:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.455 06:37:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.455 06:37:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:59.455 06:37:39 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:59.455 00:12:59.455 real 0m14.189s 00:12:59.455 user 0m19.303s 00:12:59.455 sys 0m5.763s 00:12:59.455 ************************************ 00:12:59.455 END TEST nvmf_fips 00:12:59.455 ************************************ 00:12:59.455 06:37:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.455 06:37:39 -- common/autotest_common.sh@10 -- # set +x 00:12:59.455 06:37:39 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:59.455 06:37:39 -- nvmf/nvmf.sh@64 -- # 
run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:59.455 06:37:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:59.455 06:37:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.455 06:37:39 -- common/autotest_common.sh@10 -- # set +x 00:12:59.455 ************************************ 00:12:59.455 START TEST nvmf_fuzz 00:12:59.455 ************************************ 00:12:59.455 06:37:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:59.455 * Looking for test storage... 00:12:59.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:59.455 06:37:39 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:59.455 06:37:39 -- nvmf/common.sh@7 -- # uname -s 00:12:59.455 06:37:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.455 06:37:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.455 06:37:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.455 06:37:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.455 06:37:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.455 06:37:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.456 06:37:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.456 06:37:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.456 06:37:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.456 06:37:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.456 06:37:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:12:59.456 06:37:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:12:59.456 06:37:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.456 06:37:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.456 06:37:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:59.456 06:37:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:59.456 06:37:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.456 06:37:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.456 06:37:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.456 06:37:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.456 06:37:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.456 06:37:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.456 06:37:39 -- paths/export.sh@5 -- # export PATH 00:12:59.456 06:37:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.456 06:37:39 -- nvmf/common.sh@46 -- # : 0 00:12:59.456 06:37:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.456 06:37:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.456 06:37:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.456 06:37:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.456 06:37:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.456 06:37:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.456 06:37:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.456 06:37:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.456 06:37:39 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:59.456 06:37:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.456 06:37:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.456 06:37:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.456 06:37:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.456 06:37:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.456 06:37:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.456 06:37:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.456 06:37:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.456 06:37:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:59.456 06:37:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:59.456 06:37:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:59.456 06:37:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:59.456 06:37:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:59.456 06:37:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:59.456 06:37:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.456 06:37:39 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.456 06:37:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:59.456 06:37:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:59.456 06:37:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:59.456 06:37:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:59.456 06:37:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:59.456 06:37:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.456 06:37:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:59.456 06:37:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:59.456 06:37:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:59.456 06:37:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:59.456 06:37:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:59.456 06:37:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:59.456 Cannot find device "nvmf_tgt_br" 00:12:59.456 06:37:39 -- nvmf/common.sh@154 -- # true 00:12:59.456 06:37:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.456 Cannot find device "nvmf_tgt_br2" 00:12:59.456 06:37:39 -- nvmf/common.sh@155 -- # true 00:12:59.456 06:37:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:59.456 06:37:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:59.456 Cannot find device "nvmf_tgt_br" 00:12:59.456 06:37:39 -- nvmf/common.sh@157 -- # true 00:12:59.456 06:37:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:59.715 Cannot find device "nvmf_tgt_br2" 00:12:59.715 06:37:39 -- nvmf/common.sh@158 -- # true 00:12:59.715 06:37:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:59.715 06:37:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:59.715 06:37:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.715 06:37:39 -- nvmf/common.sh@161 -- # true 00:12:59.715 06:37:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.715 06:37:39 -- nvmf/common.sh@162 -- # true 00:12:59.715 06:37:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.715 06:37:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.715 06:37:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.715 06:37:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.715 06:37:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.715 06:37:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.715 06:37:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.715 06:37:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:59.715 06:37:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:59.715 06:37:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:59.715 06:37:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:59.715 06:37:39 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:59.715 06:37:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:59.715 06:37:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.715 06:37:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.715 06:37:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.715 06:37:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:59.715 06:37:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:59.715 06:37:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.715 06:37:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.715 06:37:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.715 06:37:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.715 06:37:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.988 06:37:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:59.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:12:59.988 00:12:59.988 --- 10.0.0.2 ping statistics --- 00:12:59.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.988 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:59.988 06:37:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:59.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:59.988 00:12:59.988 --- 10.0.0.3 ping statistics --- 00:12:59.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.988 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:59.988 06:37:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:59.988 00:12:59.988 --- 10.0.0.1 ping statistics --- 00:12:59.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.988 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:59.988 06:37:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.988 06:37:39 -- nvmf/common.sh@421 -- # return 0 00:12:59.988 06:37:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:59.988 06:37:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.988 06:37:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:59.988 06:37:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:59.988 06:37:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.988 06:37:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:59.988 06:37:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:59.988 06:37:39 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77873 00:12:59.988 06:37:39 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:59.988 06:37:39 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:59.988 06:37:39 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77873 00:12:59.988 06:37:39 -- common/autotest_common.sh@819 -- # '[' -z 77873 ']' 00:12:59.988 06:37:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.988 06:37:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:59.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.988 06:37:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
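The rpc_cmd sequence that follows in the trace reduces to the target bring-up below (a sketch; paths are relative to the SPDK repo, and in the run above every command is executed inside the nvmf_tgt_ns_spdk namespace via rpc_cmd):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                  # the nvmfpid=77873 process above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # '-t tcp -o' from NVMF_TRANSPORT_OPTS; -u: I/O unit size in bytes
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512         # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With that in place, nvme_fuzz is pointed at the resulting trid ('trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'), first in randomized mode (-t 30 -S 123456) and then replaying the canned example.json cases.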
00:12:59.988 06:37:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:59.988 06:37:39 -- common/autotest_common.sh@10 -- # set +x 00:13:00.920 06:37:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:00.920 06:37:40 -- common/autotest_common.sh@852 -- # return 0 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.920 06:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.920 06:37:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.920 06:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:13:00.920 06:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.920 06:37:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.920 Malloc0 00:13:00.920 06:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:00.920 06:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.920 06:37:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.920 06:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.920 06:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.920 06:37:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.920 06:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.920 06:37:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.920 06:37:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.920 06:37:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:13:00.920 06:37:40 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:13:01.177 Shutting down the fuzz application 00:13:01.177 06:37:41 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:13:01.435 Shutting down the fuzz application 00:13:01.435 06:37:41 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.435 06:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.435 06:37:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.435 06:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.435 06:37:41 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:01.435 06:37:41 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:13:01.435 06:37:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:01.435 06:37:41 -- nvmf/common.sh@116 -- # sync 00:13:01.694 06:37:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:01.694 06:37:41 -- nvmf/common.sh@119 -- # set +e 00:13:01.694 06:37:41 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:13:01.694 06:37:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:01.694 rmmod nvme_tcp 00:13:01.694 rmmod nvme_fabrics 00:13:01.694 rmmod nvme_keyring 00:13:01.694 06:37:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:01.694 06:37:41 -- nvmf/common.sh@123 -- # set -e 00:13:01.694 06:37:41 -- nvmf/common.sh@124 -- # return 0 00:13:01.694 06:37:41 -- nvmf/common.sh@477 -- # '[' -n 77873 ']' 00:13:01.694 06:37:41 -- nvmf/common.sh@478 -- # killprocess 77873 00:13:01.694 06:37:41 -- common/autotest_common.sh@926 -- # '[' -z 77873 ']' 00:13:01.694 06:37:41 -- common/autotest_common.sh@930 -- # kill -0 77873 00:13:01.694 06:37:41 -- common/autotest_common.sh@931 -- # uname 00:13:01.694 06:37:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:01.694 06:37:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77873 00:13:01.694 06:37:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:01.694 06:37:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:01.694 killing process with pid 77873 00:13:01.694 06:37:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77873' 00:13:01.694 06:37:41 -- common/autotest_common.sh@945 -- # kill 77873 00:13:01.694 06:37:41 -- common/autotest_common.sh@950 -- # wait 77873 00:13:01.694 06:37:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:01.694 06:37:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:01.694 06:37:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:01.694 06:37:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.694 06:37:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:01.694 06:37:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.694 06:37:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.694 06:37:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.952 06:37:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:01.952 06:37:41 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:13:01.952 00:13:01.952 real 0m2.454s 00:13:01.952 user 0m2.529s 00:13:01.952 sys 0m0.552s 00:13:01.952 06:37:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.952 06:37:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.952 ************************************ 00:13:01.952 END TEST nvmf_fuzz 00:13:01.952 ************************************ 00:13:01.952 06:37:41 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:01.952 06:37:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:01.952 06:37:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.952 06:37:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.952 ************************************ 00:13:01.952 START TEST nvmf_multiconnection 00:13:01.952 ************************************ 00:13:01.952 06:37:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:01.952 * Looking for test storage... 
00:13:01.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.952 06:37:41 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:01.952 06:37:41 -- nvmf/common.sh@7 -- # uname -s 00:13:01.952 06:37:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.952 06:37:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.952 06:37:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.952 06:37:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.952 06:37:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.952 06:37:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.952 06:37:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.952 06:37:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.952 06:37:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.952 06:37:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.952 06:37:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:13:01.952 06:37:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:13:01.952 06:37:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.952 06:37:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.952 06:37:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:01.952 06:37:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.952 06:37:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.952 06:37:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.952 06:37:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.952 06:37:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.952 06:37:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.952 06:37:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.952 06:37:41 -- 
paths/export.sh@5 -- # export PATH 00:13:01.952 06:37:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.952 06:37:41 -- nvmf/common.sh@46 -- # : 0 00:13:01.952 06:37:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:01.952 06:37:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:01.952 06:37:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:01.952 06:37:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.952 06:37:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.952 06:37:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:01.952 06:37:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:01.952 06:37:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:01.952 06:37:41 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:01.952 06:37:41 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:01.952 06:37:41 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:13:01.952 06:37:41 -- target/multiconnection.sh@16 -- # nvmftestinit 00:13:01.952 06:37:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:01.952 06:37:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.952 06:37:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:01.952 06:37:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:01.952 06:37:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:01.952 06:37:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.952 06:37:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.952 06:37:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.952 06:37:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:01.952 06:37:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:01.952 06:37:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:01.952 06:37:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:01.952 06:37:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:01.952 06:37:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:01.952 06:37:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.952 06:37:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.952 06:37:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:01.952 06:37:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:01.952 06:37:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:01.952 06:37:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:01.952 06:37:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:01.952 06:37:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.952 06:37:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:01.952 06:37:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:01.952 06:37:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:01.952 06:37:41 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:01.952 06:37:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:01.952 06:37:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:01.952 Cannot find device "nvmf_tgt_br" 00:13:01.952 06:37:41 -- nvmf/common.sh@154 -- # true 00:13:01.952 06:37:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.952 Cannot find device "nvmf_tgt_br2" 00:13:01.952 06:37:41 -- nvmf/common.sh@155 -- # true 00:13:01.952 06:37:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:01.952 06:37:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:01.952 Cannot find device "nvmf_tgt_br" 00:13:01.952 06:37:41 -- nvmf/common.sh@157 -- # true 00:13:01.952 06:37:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:02.210 Cannot find device "nvmf_tgt_br2" 00:13:02.210 06:37:41 -- nvmf/common.sh@158 -- # true 00:13:02.210 06:37:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:02.210 06:37:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:02.210 06:37:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.210 06:37:41 -- nvmf/common.sh@161 -- # true 00:13:02.210 06:37:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.210 06:37:41 -- nvmf/common.sh@162 -- # true 00:13:02.210 06:37:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.210 06:37:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.210 06:37:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.210 06:37:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.210 06:37:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.210 06:37:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.210 06:37:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.210 06:37:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:02.210 06:37:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:02.210 06:37:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:02.210 06:37:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:02.210 06:37:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:02.210 06:37:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:02.210 06:37:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.210 06:37:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.210 06:37:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:02.210 06:37:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:02.210 06:37:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:02.210 06:37:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.210 06:37:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.210 06:37:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.469 
06:37:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.469 06:37:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.469 06:37:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:02.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:13:02.469 00:13:02.469 --- 10.0.0.2 ping statistics --- 00:13:02.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.469 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:02.469 06:37:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:02.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:13:02.469 00:13:02.469 --- 10.0.0.3 ping statistics --- 00:13:02.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.469 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:02.469 06:37:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:02.469 00:13:02.469 --- 10.0.0.1 ping statistics --- 00:13:02.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.469 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:02.469 06:37:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.469 06:37:42 -- nvmf/common.sh@421 -- # return 0 00:13:02.469 06:37:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:02.469 06:37:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.469 06:37:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:02.469 06:37:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:02.469 06:37:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.469 06:37:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:02.469 06:37:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:02.469 06:37:42 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:13:02.469 06:37:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:02.469 06:37:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:02.469 06:37:42 -- common/autotest_common.sh@10 -- # set +x 00:13:02.469 06:37:42 -- nvmf/common.sh@469 -- # nvmfpid=78066 00:13:02.469 06:37:42 -- nvmf/common.sh@470 -- # waitforlisten 78066 00:13:02.469 06:37:42 -- common/autotest_common.sh@819 -- # '[' -z 78066 ']' 00:13:02.469 06:37:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.469 06:37:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.469 06:37:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:02.469 06:37:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.469 06:37:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:02.469 06:37:42 -- common/autotest_common.sh@10 -- # set +x 00:13:02.469 [2024-07-12 06:37:42.234204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
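For reference, the nvmf_veth_init sequence traced above reduces to roughly the following topology setup (paraphrased and condensed from the log; the link-up steps and the best-effort cleanup errors from the earlier pass are omitted, and device names and addresses are exactly those in the trace):

    # The SPDK target lives in its own network namespace; three veth pairs
    # reach it through a bridge in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target ends move into the ns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The three pings that follow confirm both directions work (10.0.0.2 and 10.0.0.3 reachable from the root namespace, 10.0.0.1 reachable from inside it) before nvmf_tgt is started under `ip netns exec nvmf_tgt_ns_spdk` with the 0xF core mask.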
00:13:02.469 [2024-07-12 06:37:42.234283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.469 [2024-07-12 06:37:42.375312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.726 [2024-07-12 06:37:42.419235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.726 [2024-07-12 06:37:42.419671] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.726 [2024-07-12 06:37:42.419894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.726 [2024-07-12 06:37:42.420113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.726 [2024-07-12 06:37:42.420384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.726 [2024-07-12 06:37:42.420441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.726 [2024-07-12 06:37:42.420637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.726 [2024-07-12 06:37:42.420660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.660 06:37:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:03.660 06:37:43 -- common/autotest_common.sh@852 -- # return 0 00:13:03.660 06:37:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:03.660 06:37:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.660 06:37:43 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 [2024-07-12 06:37:43.271942] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:03.660 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.660 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 Malloc1 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 [2024-07-12 06:37:43.338741] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.660 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 Malloc2 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.660 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 Malloc3 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:03.660 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.660 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.660 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:03.660 
06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.660 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.660 Malloc4 00:13:03.660 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.661 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 Malloc5 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.661 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 Malloc6 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.661 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 Malloc7 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.661 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.661 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:03.661 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.661 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 Malloc8 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 
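The eleven near-identical RPC sequences traced here (Malloc1 through Malloc11) are the expansion of the loop at target/multiconnection.sh@21-25. Condensed, with the variable values set earlier in this run (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SUBSYS=11), it amounts to:

    # One 64 MiB malloc bdev (512 B blocks) per subsystem; each subsystem allows
    # any host (-a), carries serial number SPDK$i, and listens on 10.0.0.2:4420.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The serial set by `-s SPDK$i` matters later: the host-side waitforserial helper identifies each attached namespace by it.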
00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.920 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 Malloc9 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.920 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 Malloc10 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.920 06:37:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 Malloc11 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:03.920 06:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.920 06:37:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 06:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.920 06:37:43 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:03.920 06:37:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.920 06:37:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.178 06:37:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:04.178 06:37:43 -- common/autotest_common.sh@1177 -- # local i=0 00:13:04.178 06:37:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.178 06:37:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:04.178 06:37:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:06.119 06:37:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:06.119 06:37:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:06.119 06:37:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:13:06.119 06:37:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:06.119 06:37:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.119 06:37:45 -- common/autotest_common.sh@1187 -- # return 0 00:13:06.119 06:37:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.119 06:37:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:06.119 06:37:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:06.119 06:37:46 -- common/autotest_common.sh@1177 -- # local i=0 00:13:06.119 06:37:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.119 06:37:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:06.119 06:37:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:08.650 06:37:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:08.650 06:37:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:08.650 06:37:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:13:08.650 06:37:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:08.650 06:37:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.650 06:37:48 -- common/autotest_common.sh@1187 -- # return 0 00:13:08.650 06:37:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:13:08.650 06:37:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:08.650 06:37:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:08.650 06:37:48 -- common/autotest_common.sh@1177 -- # local i=0 00:13:08.650 06:37:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.650 06:37:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:08.650 06:37:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:10.554 06:37:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:10.554 06:37:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:10.554 06:37:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:13:10.554 06:37:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:10.554 06:37:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.554 06:37:50 -- common/autotest_common.sh@1187 -- # return 0 00:13:10.554 06:37:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:10.554 06:37:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:10.554 06:37:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:10.554 06:37:50 -- common/autotest_common.sh@1177 -- # local i=0 00:13:10.554 06:37:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.554 06:37:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:10.554 06:37:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:12.455 06:37:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:12.455 06:37:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:12.455 06:37:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:13:12.455 06:37:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:12.455 06:37:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.455 06:37:52 -- common/autotest_common.sh@1187 -- # return 0 00:13:12.455 06:37:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:12.455 06:37:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:12.713 06:37:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:12.713 06:37:52 -- common/autotest_common.sh@1177 -- # local i=0 00:13:12.713 06:37:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.713 06:37:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:12.713 06:37:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:14.612 06:37:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:14.612 06:37:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:14.612 06:37:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:13:14.612 06:37:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:14.612 06:37:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.612 06:37:54 
-- common/autotest_common.sh@1187 -- # return 0 00:13:14.612 06:37:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:14.612 06:37:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:14.870 06:37:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:14.870 06:37:54 -- common/autotest_common.sh@1177 -- # local i=0 00:13:14.870 06:37:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.870 06:37:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:14.870 06:37:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:16.767 06:37:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:16.767 06:37:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:16.767 06:37:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:13:16.767 06:37:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:16.767 06:37:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.767 06:37:56 -- common/autotest_common.sh@1187 -- # return 0 00:13:16.767 06:37:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:16.767 06:37:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:17.024 06:37:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:17.025 06:37:56 -- common/autotest_common.sh@1177 -- # local i=0 00:13:17.025 06:37:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.025 06:37:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:17.025 06:37:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:18.925 06:37:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:18.925 06:37:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:13:18.925 06:37:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:18.925 06:37:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:18.925 06:37:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.925 06:37:58 -- common/autotest_common.sh@1187 -- # return 0 00:13:18.925 06:37:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:18.925 06:37:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:19.182 06:37:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:19.182 06:37:58 -- common/autotest_common.sh@1177 -- # local i=0 00:13:19.182 06:37:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.182 06:37:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:19.182 06:37:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:21.085 06:38:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:21.085 06:38:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:21.085 06:38:00 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:13:21.085 06:38:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
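The connect phase traced above alternates `nvme connect` with a serial-number poll. A condensed sketch (paraphrasing target/multiconnection.sh@28-30 and the autotest_common.sh waitforserial helper as traced, not the verbatim scripts):

    # Attach each subsystem from the initiator, then wait until lsblk shows
    # exactly one block device carrying the serial the subsystem was created with.
    waitforserial() {   # simplified; the real helper also accepts an expected device count
        local serial=$1 i=0 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == 1 )) && return 0
        done
        return 1
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect "${NVME_HOST[@]}" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done

Here NVME_HOST carries the --hostnqn/--hostid pair generated by `nvme gen-hostnqn` at the top of this run. Once all eleven namespaces are attached (/dev/nvme0n1 through /dev/nvme10n1), the test proceeds to the fio read and randwrite passes below.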
00:13:21.085 06:38:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.085 06:38:00 -- common/autotest_common.sh@1187 -- # return 0 00:13:21.085 06:38:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.085 06:38:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:21.347 06:38:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:21.347 06:38:01 -- common/autotest_common.sh@1177 -- # local i=0 00:13:21.347 06:38:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.347 06:38:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:21.347 06:38:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:23.266 06:38:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:23.266 06:38:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:23.266 06:38:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:13:23.266 06:38:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:23.266 06:38:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.266 06:38:03 -- common/autotest_common.sh@1187 -- # return 0 00:13:23.266 06:38:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.266 06:38:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:23.523 06:38:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:23.523 06:38:03 -- common/autotest_common.sh@1177 -- # local i=0 00:13:23.523 06:38:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.523 06:38:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:23.523 06:38:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:25.421 06:38:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:25.421 06:38:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:25.421 06:38:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:13:25.421 06:38:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:25.421 06:38:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.421 06:38:05 -- common/autotest_common.sh@1187 -- # return 0 00:13:25.421 06:38:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:25.421 06:38:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:25.678 06:38:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:25.678 06:38:05 -- common/autotest_common.sh@1177 -- # local i=0 00:13:25.678 06:38:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.678 06:38:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:25.678 06:38:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:27.579 06:38:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:27.579 06:38:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:27.579 06:38:07 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:13:27.579 06:38:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:27.579 06:38:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.579 06:38:07 -- common/autotest_common.sh@1187 -- # return 0 00:13:27.579 06:38:07 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:27.579 [global] 00:13:27.579 thread=1 00:13:27.579 invalidate=1 00:13:27.579 rw=read 00:13:27.579 time_based=1 00:13:27.579 runtime=10 00:13:27.579 ioengine=libaio 00:13:27.579 direct=1 00:13:27.579 bs=262144 00:13:27.579 iodepth=64 00:13:27.579 norandommap=1 00:13:27.579 numjobs=1 00:13:27.579 00:13:27.579 [job0] 00:13:27.579 filename=/dev/nvme0n1 00:13:27.579 [job1] 00:13:27.579 filename=/dev/nvme10n1 00:13:27.579 [job2] 00:13:27.579 filename=/dev/nvme1n1 00:13:27.837 [job3] 00:13:27.837 filename=/dev/nvme2n1 00:13:27.837 [job4] 00:13:27.837 filename=/dev/nvme3n1 00:13:27.837 [job5] 00:13:27.837 filename=/dev/nvme4n1 00:13:27.837 [job6] 00:13:27.837 filename=/dev/nvme5n1 00:13:27.837 [job7] 00:13:27.837 filename=/dev/nvme6n1 00:13:27.837 [job8] 00:13:27.837 filename=/dev/nvme7n1 00:13:27.837 [job9] 00:13:27.837 filename=/dev/nvme8n1 00:13:27.837 [job10] 00:13:27.837 filename=/dev/nvme9n1 00:13:27.837 Could not set queue depth (nvme0n1) 00:13:27.837 Could not set queue depth (nvme10n1) 00:13:27.837 Could not set queue depth (nvme1n1) 00:13:27.837 Could not set queue depth (nvme2n1) 00:13:27.837 Could not set queue depth (nvme3n1) 00:13:27.837 Could not set queue depth (nvme4n1) 00:13:27.837 Could not set queue depth (nvme5n1) 00:13:27.837 Could not set queue depth (nvme6n1) 00:13:27.837 Could not set queue depth (nvme7n1) 00:13:27.837 Could not set queue depth (nvme8n1) 00:13:27.837 Could not set queue depth (nvme9n1) 00:13:27.838 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:27.838 fio-3.35 00:13:27.838 Starting 11 threads 00:13:40.042 00:13:40.042 job0: (groupid=0, jobs=1): err= 0: pid=78524: Fri Jul 12 06:38:18 2024 00:13:40.042 read: IOPS=696, BW=174MiB/s (183MB/s)(1751MiB/10054msec) 00:13:40.042 slat (usec): min=21, max=28919, avg=1422.66, stdev=3153.20 
00:13:40.042 clat (msec): min=45, max=150, avg=90.32, stdev= 9.00 00:13:40.042 lat (msec): min=45, max=150, avg=91.75, stdev= 9.03 00:13:40.043 clat percentiles (msec): 00:13:40.043 | 1.00th=[ 67], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 84], 00:13:40.043 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:13:40.043 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 102], 95.00th=[ 104], 00:13:40.043 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 133], 99.95th=[ 144], 00:13:40.043 | 99.99th=[ 150] 00:13:40.043 bw ( KiB/s): min=169984, max=185344, per=8.60%, avg=177715.20, stdev=4576.15, samples=20 00:13:40.043 iops : min= 664, max= 724, avg=694.20, stdev=17.88, samples=20 00:13:40.043 lat (msec) : 50=0.26%, 100=87.62%, 250=12.12% 00:13:40.043 cpu : usr=0.42%, sys=3.03%, ctx=1540, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=7005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.043 job1: (groupid=0, jobs=1): err= 0: pid=78525: Fri Jul 12 06:38:18 2024 00:13:40.043 read: IOPS=1934, BW=484MiB/s (507MB/s)(4843MiB/10011msec) 00:13:40.043 slat (usec): min=16, max=8158, avg=512.76, stdev=1062.44 00:13:40.043 clat (usec): min=8125, max=56964, avg=32533.12, stdev=2256.43 00:13:40.043 lat (usec): min=10654, max=56995, avg=33045.88, stdev=2252.47 00:13:40.043 clat percentiles (usec): 00:13:40.043 | 1.00th=[27657], 5.00th=[29492], 10.00th=[30278], 20.00th=[31065], 00:13:40.043 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[32900], 00:13:40.043 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:13:40.043 | 99.00th=[37487], 99.50th=[39060], 99.90th=[50594], 99.95th=[53216], 00:13:40.043 | 99.99th=[56886] 00:13:40.043 bw ( KiB/s): min=459776, max=506368, per=23.92%, avg=494206.60, stdev=10285.01, samples=20 00:13:40.043 iops : min= 1796, max= 1978, avg=1930.40, stdev=40.18, samples=20 00:13:40.043 lat (msec) : 10=0.01%, 20=0.27%, 50=99.59%, 100=0.13% 00:13:40.043 cpu : usr=0.70%, sys=5.34%, ctx=4076, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=19370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.043 job2: (groupid=0, jobs=1): err= 0: pid=78526: Fri Jul 12 06:38:18 2024 00:13:40.043 read: IOPS=518, BW=130MiB/s (136MB/s)(1308MiB/10096msec) 00:13:40.043 slat (usec): min=17, max=28170, avg=1905.96, stdev=4119.00 00:13:40.043 clat (msec): min=22, max=206, avg=121.37, stdev=10.90 00:13:40.043 lat (msec): min=22, max=206, avg=123.28, stdev=11.08 00:13:40.043 clat percentiles (msec): 00:13:40.043 | 1.00th=[ 74], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 116], 00:13:40.043 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 123], 00:13:40.043 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 136], 00:13:40.043 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 197], 00:13:40.043 | 99.99th=[ 207] 00:13:40.043 bw ( KiB/s): min=122368, max=138240, per=6.41%, avg=132352.00, stdev=3974.62, samples=20 
00:13:40.043 iops : min= 478, max= 540, avg=517.00, stdev=15.53, samples=20 00:13:40.043 lat (msec) : 50=0.25%, 100=1.09%, 250=98.66% 00:13:40.043 cpu : usr=0.35%, sys=2.09%, ctx=1244, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=5233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.043 job3: (groupid=0, jobs=1): err= 0: pid=78527: Fri Jul 12 06:38:18 2024 00:13:40.043 read: IOPS=660, BW=165MiB/s (173MB/s)(1662MiB/10065msec) 00:13:40.043 slat (usec): min=17, max=73852, avg=1481.46, stdev=3412.58 00:13:40.043 clat (msec): min=61, max=170, avg=95.28, stdev=11.42 00:13:40.043 lat (msec): min=61, max=170, avg=96.76, stdev=11.52 00:13:40.043 clat percentiles (msec): 00:13:40.043 | 1.00th=[ 74], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 87], 00:13:40.043 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:13:40.043 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 115], 00:13:40.043 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 161], 00:13:40.043 | 99.99th=[ 171] 00:13:40.043 bw ( KiB/s): min=118784, max=181760, per=8.16%, avg=168558.60, stdev=14658.43, samples=20 00:13:40.043 iops : min= 464, max= 710, avg=658.40, stdev=57.25, samples=20 00:13:40.043 lat (msec) : 100=70.85%, 250=29.15% 00:13:40.043 cpu : usr=0.35%, sys=2.70%, ctx=1481, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=6648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.043 job4: (groupid=0, jobs=1): err= 0: pid=78528: Fri Jul 12 06:38:18 2024 00:13:40.043 read: IOPS=518, BW=130MiB/s (136MB/s)(1308MiB/10094msec) 00:13:40.043 slat (usec): min=21, max=51813, avg=1906.85, stdev=4135.07 00:13:40.043 clat (msec): min=17, max=210, avg=121.45, stdev=10.15 00:13:40.043 lat (msec): min=18, max=210, avg=123.36, stdev=10.32 00:13:40.043 clat percentiles (msec): 00:13:40.043 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:13:40.043 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 123], 00:13:40.043 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 134], 00:13:40.043 | 99.00th=[ 142], 99.50th=[ 161], 99.90th=[ 205], 99.95th=[ 211], 00:13:40.043 | 99.99th=[ 211] 00:13:40.043 bw ( KiB/s): min=123639, max=137216, per=6.40%, avg=132287.55, stdev=3448.69, samples=20 00:13:40.043 iops : min= 482, max= 536, avg=516.70, stdev=13.60, samples=20 00:13:40.043 lat (msec) : 20=0.02%, 50=0.42%, 100=0.40%, 250=99.16% 00:13:40.043 cpu : usr=0.30%, sys=2.58%, ctx=1266, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=5230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.043 job5: (groupid=0, jobs=1): err= 0: pid=78529: 
Fri Jul 12 06:38:18 2024 00:13:40.043 read: IOPS=516, BW=129MiB/s (135MB/s)(1304MiB/10090msec) 00:13:40.043 slat (usec): min=21, max=69423, avg=1911.99, stdev=4289.74 00:13:40.043 clat (msec): min=35, max=201, avg=121.77, stdev=10.63 00:13:40.043 lat (msec): min=38, max=204, avg=123.68, stdev=10.92 00:13:40.043 clat percentiles (msec): 00:13:40.043 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:13:40.043 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 122], 60.00th=[ 124], 00:13:40.043 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 136], 00:13:40.043 | 99.00th=[ 144], 99.50th=[ 169], 99.90th=[ 203], 99.95th=[ 203], 00:13:40.043 | 99.99th=[ 203] 00:13:40.043 bw ( KiB/s): min=121101, max=136704, per=6.38%, avg=131865.05, stdev=3857.51, samples=20 00:13:40.043 iops : min= 473, max= 534, avg=514.95, stdev=14.99, samples=20 00:13:40.043 lat (msec) : 50=0.71%, 100=0.21%, 250=99.08% 00:13:40.043 cpu : usr=0.27%, sys=2.36%, ctx=1244, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=5215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.043 job6: (groupid=0, jobs=1): err= 0: pid=78530: Fri Jul 12 06:38:18 2024 00:13:40.043 read: IOPS=691, BW=173MiB/s (181MB/s)(1739MiB/10056msec) 00:13:40.043 slat (usec): min=17, max=45223, avg=1433.94, stdev=3247.03 00:13:40.043 clat (msec): min=48, max=147, avg=90.97, stdev= 9.90 00:13:40.043 lat (msec): min=48, max=147, avg=92.40, stdev= 9.87 00:13:40.043 clat percentiles (msec): 00:13:40.043 | 1.00th=[ 66], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 85], 00:13:40.043 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:13:40.043 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 107], 00:13:40.043 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 144], 00:13:40.043 | 99.99th=[ 148] 00:13:40.043 bw ( KiB/s): min=164352, max=183296, per=8.54%, avg=176486.40, stdev=5474.48, samples=20 00:13:40.043 iops : min= 642, max= 716, avg=689.40, stdev=21.38, samples=20 00:13:40.043 lat (msec) : 50=0.03%, 100=85.65%, 250=14.32% 00:13:40.043 cpu : usr=0.32%, sys=2.04%, ctx=1655, majf=0, minf=4097 00:13:40.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:40.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.043 issued rwts: total=6957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.044 job7: (groupid=0, jobs=1): err= 0: pid=78531: Fri Jul 12 06:38:18 2024 00:13:40.044 read: IOPS=672, BW=168MiB/s (176MB/s)(1693MiB/10071msec) 00:13:40.044 slat (usec): min=17, max=39813, avg=1472.05, stdev=3236.93 00:13:40.044 clat (msec): min=53, max=164, avg=93.57, stdev=10.34 00:13:40.044 lat (msec): min=54, max=164, avg=95.04, stdev=10.34 00:13:40.044 clat percentiles (msec): 00:13:40.044 | 1.00th=[ 66], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 86], 00:13:40.044 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:13:40.044 | 70.00th=[ 99], 80.00th=[ 102], 90.00th=[ 107], 95.00th=[ 111], 00:13:40.044 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 155], 00:13:40.044 | 99.99th=[ 
165] 00:13:40.044 bw ( KiB/s): min=145408, max=183808, per=8.31%, avg=171791.35, stdev=8878.03, samples=20 00:13:40.044 iops : min= 568, max= 718, avg=671.05, stdev=34.70, samples=20 00:13:40.044 lat (msec) : 100=74.93%, 250=25.07% 00:13:40.044 cpu : usr=0.37%, sys=2.48%, ctx=1508, majf=0, minf=4097 00:13:40.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:40.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.044 issued rwts: total=6773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.044 job8: (groupid=0, jobs=1): err= 0: pid=78532: Fri Jul 12 06:38:18 2024 00:13:40.044 read: IOPS=526, BW=132MiB/s (138MB/s)(1330MiB/10099msec) 00:13:40.044 slat (usec): min=17, max=33585, avg=1860.58, stdev=4201.23 00:13:40.044 clat (msec): min=34, max=219, avg=119.46, stdev=14.34 00:13:40.044 lat (msec): min=34, max=219, avg=121.32, stdev=14.62 00:13:40.044 clat percentiles (msec): 00:13:40.044 | 1.00th=[ 63], 5.00th=[ 93], 10.00th=[ 110], 20.00th=[ 116], 00:13:40.044 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 122], 60.00th=[ 124], 00:13:40.044 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 136], 00:13:40.044 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 211], 99.95th=[ 211], 00:13:40.044 | 99.99th=[ 220] 00:13:40.044 bw ( KiB/s): min=125440, max=162629, per=6.51%, avg=134569.85, stdev=8732.01, samples=20 00:13:40.044 iops : min= 490, max= 635, avg=525.65, stdev=34.06, samples=20 00:13:40.044 lat (msec) : 50=0.60%, 100=6.66%, 250=92.74% 00:13:40.044 cpu : usr=0.23%, sys=1.63%, ctx=1385, majf=0, minf=4097 00:13:40.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:40.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.044 issued rwts: total=5319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.044 job9: (groupid=0, jobs=1): err= 0: pid=78533: Fri Jul 12 06:38:18 2024 00:13:40.044 read: IOPS=692, BW=173MiB/s (181MB/s)(1739MiB/10053msec) 00:13:40.044 slat (usec): min=21, max=27419, avg=1433.04, stdev=3183.64 00:13:40.044 clat (msec): min=28, max=147, avg=90.92, stdev= 8.82 00:13:40.044 lat (msec): min=28, max=148, avg=92.35, stdev= 8.85 00:13:40.044 clat percentiles (msec): 00:13:40.044 | 1.00th=[ 71], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 85], 00:13:40.044 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:13:40.044 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 105], 00:13:40.044 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 146], 99.95th=[ 146], 00:13:40.044 | 99.99th=[ 148] 00:13:40.044 bw ( KiB/s): min=159744, max=183296, per=8.54%, avg=176468.20, stdev=5542.63, samples=20 00:13:40.044 iops : min= 624, max= 716, avg=689.30, stdev=21.62, samples=20 00:13:40.044 lat (msec) : 50=0.09%, 100=87.05%, 250=12.86% 00:13:40.044 cpu : usr=0.32%, sys=2.85%, ctx=1472, majf=0, minf=4097 00:13:40.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:40.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.044 issued rwts: total=6957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.044 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:13:40.044 job10: (groupid=0, jobs=1): err= 0: pid=78534: Fri Jul 12 06:38:18 2024 00:13:40.044 read: IOPS=675, BW=169MiB/s (177MB/s)(1700MiB/10067msec) 00:13:40.044 slat (usec): min=21, max=23766, avg=1466.52, stdev=3152.65 00:13:40.044 clat (msec): min=37, max=167, avg=93.21, stdev=10.51 00:13:40.044 lat (msec): min=38, max=167, avg=94.68, stdev=10.57 00:13:40.044 clat percentiles (msec): 00:13:40.044 | 1.00th=[ 72], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 86], 00:13:40.044 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 95], 00:13:40.044 | 70.00th=[ 99], 80.00th=[ 102], 90.00th=[ 107], 95.00th=[ 110], 00:13:40.044 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 159], 99.95th=[ 159], 00:13:40.044 | 99.99th=[ 169] 00:13:40.044 bw ( KiB/s): min=139264, max=180736, per=8.34%, avg=172399.55, stdev=8972.03, samples=20 00:13:40.044 iops : min= 544, max= 706, avg=673.40, stdev=35.08, samples=20 00:13:40.044 lat (msec) : 50=0.28%, 100=76.36%, 250=23.36% 00:13:40.044 cpu : usr=0.31%, sys=3.03%, ctx=1523, majf=0, minf=4097 00:13:40.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:40.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:40.044 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.044 00:13:40.044 Run status group 0 (all jobs): 00:13:40.044 READ: bw=2018MiB/s (2116MB/s), 129MiB/s-484MiB/s (135MB/s-507MB/s), io=19.9GiB (21.4GB), run=10011-10099msec 00:13:40.044 00:13:40.044 Disk stats (read/write): 00:13:40.044 nvme0n1: ios=13913/0, merge=0/0, ticks=1236637/0, in_queue=1236637, util=97.81% 00:13:40.044 nvme10n1: ios=38670/0, merge=0/0, ticks=1244116/0, in_queue=1244116, util=97.90% 00:13:40.044 nvme1n1: ios=10354/0, merge=0/0, ticks=1228744/0, in_queue=1228744, util=98.16% 00:13:40.044 nvme2n1: ios=13200/0, merge=0/0, ticks=1235244/0, in_queue=1235244, util=98.16% 00:13:40.044 nvme3n1: ios=10349/0, merge=0/0, ticks=1229254/0, in_queue=1229254, util=98.33% 00:13:40.044 nvme4n1: ios=10321/0, merge=0/0, ticks=1230665/0, in_queue=1230665, util=98.50% 00:13:40.044 nvme5n1: ios=13803/0, merge=0/0, ticks=1237185/0, in_queue=1237185, util=98.63% 00:13:40.044 nvme6n1: ios=13438/0, merge=0/0, ticks=1236325/0, in_queue=1236325, util=98.68% 00:13:40.044 nvme7n1: ios=10524/0, merge=0/0, ticks=1231421/0, in_queue=1231421, util=98.94% 00:13:40.044 nvme8n1: ios=13814/0, merge=0/0, ticks=1236636/0, in_queue=1236636, util=99.04% 00:13:40.044 nvme9n1: ios=13483/0, merge=0/0, ticks=1235779/0, in_queue=1235779, util=99.17% 00:13:40.044 06:38:18 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:40.044 [global] 00:13:40.044 thread=1 00:13:40.044 invalidate=1 00:13:40.044 rw=randwrite 00:13:40.044 time_based=1 00:13:40.044 runtime=10 00:13:40.044 ioengine=libaio 00:13:40.044 direct=1 00:13:40.044 bs=262144 00:13:40.044 iodepth=64 00:13:40.044 norandommap=1 00:13:40.044 numjobs=1 00:13:40.044 00:13:40.044 [job0] 00:13:40.044 filename=/dev/nvme0n1 00:13:40.044 [job1] 00:13:40.044 filename=/dev/nvme10n1 00:13:40.044 [job2] 00:13:40.044 filename=/dev/nvme1n1 00:13:40.044 [job3] 00:13:40.044 filename=/dev/nvme2n1 00:13:40.044 [job4] 00:13:40.044 filename=/dev/nvme3n1 00:13:40.044 [job5] 00:13:40.044 filename=/dev/nvme4n1 00:13:40.044 [job6] 00:13:40.044 
filename=/dev/nvme5n1 00:13:40.044 [job7] 00:13:40.044 filename=/dev/nvme6n1 00:13:40.044 [job8] 00:13:40.044 filename=/dev/nvme7n1 00:13:40.044 [job9] 00:13:40.044 filename=/dev/nvme8n1 00:13:40.045 [job10] 00:13:40.045 filename=/dev/nvme9n1 00:13:40.045 Could not set queue depth (nvme0n1) 00:13:40.045 Could not set queue depth (nvme10n1) 00:13:40.045 Could not set queue depth (nvme1n1) 00:13:40.045 Could not set queue depth (nvme2n1) 00:13:40.045 Could not set queue depth (nvme3n1) 00:13:40.045 Could not set queue depth (nvme4n1) 00:13:40.045 Could not set queue depth (nvme5n1) 00:13:40.045 Could not set queue depth (nvme6n1) 00:13:40.045 Could not set queue depth (nvme7n1) 00:13:40.045 Could not set queue depth (nvme8n1) 00:13:40.045 Could not set queue depth (nvme9n1) 00:13:40.045 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:40.045 fio-3.35 00:13:40.045 Starting 11 threads 00:13:50.027 00:13:50.027 job0: (groupid=0, jobs=1): err= 0: pid=78728: Fri Jul 12 06:38:28 2024 00:13:50.027 write: IOPS=512, BW=128MiB/s (134MB/s)(1296MiB/10114msec); 0 zone resets 00:13:50.027 slat (usec): min=19, max=19642, avg=1906.86, stdev=3280.96 00:13:50.027 clat (msec): min=5, max=235, avg=122.91, stdev=12.04 00:13:50.027 lat (msec): min=6, max=235, avg=124.81, stdev=11.77 00:13:50.027 clat percentiles (msec): 00:13:50.027 | 1.00th=[ 82], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 118], 00:13:50.027 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 124], 00:13:50.027 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 133], 00:13:50.027 | 99.00th=[ 157], 99.50th=[ 190], 99.90th=[ 228], 99.95th=[ 228], 00:13:50.027 | 99.99th=[ 236] 00:13:50.027 bw ( KiB/s): min=116736, max=138752, per=8.72%, avg=131097.60, stdev=4807.22, samples=20 00:13:50.027 iops : min= 456, max= 542, avg=512.10, stdev=18.78, samples=20 00:13:50.027 lat (msec) : 10=0.10%, 20=0.08%, 50=0.39%, 100=0.64%, 250=98.80% 00:13:50.027 cpu : usr=0.99%, sys=1.53%, ctx=5489, majf=0, minf=1 00:13:50.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:50.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.027 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.027 issued rwts: total=0,5184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.027 job1: (groupid=0, jobs=1): err= 0: pid=78729: Fri Jul 12 06:38:28 2024 00:13:50.027 write: IOPS=403, BW=101MiB/s (106MB/s)(1023MiB/10139msec); 0 zone resets 00:13:50.027 slat (usec): min=19, max=42470, avg=2411.33, stdev=4234.69 00:13:50.027 clat (msec): min=49, max=287, avg=156.15, stdev=14.15 00:13:50.027 lat (msec): min=49, max=287, avg=158.56, stdev=13.79 00:13:50.027 clat percentiles (msec): 00:13:50.027 | 1.00th=[ 92], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:13:50.027 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:13:50.027 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 167], 00:13:50.027 | 99.00th=[ 190], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 279], 00:13:50.027 | 99.99th=[ 288] 00:13:50.027 bw ( KiB/s): min=98304, max=104448, per=6.86%, avg=103101.55, stdev=1969.81, samples=20 00:13:50.027 iops : min= 384, max= 408, avg=402.70, stdev= 7.69, samples=20 00:13:50.027 lat (msec) : 50=0.10%, 100=1.20%, 250=98.36%, 500=0.34% 00:13:50.027 cpu : usr=0.59%, sys=0.94%, ctx=5148, majf=0, minf=1 00:13:50.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:50.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.027 issued rwts: total=0,4090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.027 job2: (groupid=0, jobs=1): err= 0: pid=78741: Fri Jul 12 06:38:28 2024 00:13:50.027 write: IOPS=523, BW=131MiB/s (137MB/s)(1323MiB/10108msec); 0 zone resets 00:13:50.027 slat (usec): min=20, max=12089, avg=1875.56, stdev=3214.30 00:13:50.027 clat (msec): min=4, max=227, avg=120.30, stdev=12.30 00:13:50.027 lat (msec): min=4, max=227, avg=122.18, stdev=12.09 00:13:50.027 clat percentiles (msec): 00:13:50.027 | 1.00th=[ 67], 5.00th=[ 113], 10.00th=[ 114], 20.00th=[ 116], 00:13:50.027 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 122], 60.00th=[ 123], 00:13:50.027 | 70.00th=[ 123], 80.00th=[ 124], 90.00th=[ 127], 95.00th=[ 129], 00:13:50.027 | 99.00th=[ 138], 99.50th=[ 176], 99.90th=[ 220], 99.95th=[ 220], 00:13:50.027 | 99.99th=[ 228] 00:13:50.027 bw ( KiB/s): min=126976, max=144896, per=8.91%, avg=133862.40, stdev=3686.38, samples=20 00:13:50.027 iops : min= 496, max= 566, avg=522.90, stdev=14.40, samples=20 00:13:50.027 lat (msec) : 10=0.08%, 20=0.13%, 50=0.57%, 100=1.68%, 250=97.54% 00:13:50.027 cpu : usr=0.98%, sys=1.51%, ctx=4948, majf=0, minf=1 00:13:50.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:50.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.028 issued rwts: total=0,5292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.028 job3: (groupid=0, jobs=1): err= 0: pid=78742: Fri Jul 12 06:38:28 2024 00:13:50.028 write: IOPS=399, BW=99.8MiB/s (105MB/s)(1012MiB/10138msec); 0 zone resets 00:13:50.028 slat (usec): min=18, max=60629, avg=2464.65, stdev=4313.23 00:13:50.028 clat (msec): min=62, max=292, avg=157.72, stdev=13.00 00:13:50.028 lat (msec): min=62, max=292, avg=160.18, stdev=12.46 00:13:50.028 clat percentiles (msec): 
00:13:50.028 | 1.00th=[ 144], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 153], 00:13:50.028 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:13:50.028 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 169], 00:13:50.028 | 99.00th=[ 205], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:13:50.028 | 99.99th=[ 292] 00:13:50.028 bw ( KiB/s): min=83968, max=104448, per=6.79%, avg=102041.60, stdev=4686.97, samples=20 00:13:50.028 iops : min= 328, max= 408, avg=398.60, stdev=18.31, samples=20 00:13:50.028 lat (msec) : 100=0.49%, 250=99.06%, 500=0.44% 00:13:50.028 cpu : usr=0.72%, sys=1.15%, ctx=5105, majf=0, minf=1 00:13:50.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:50.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.028 issued rwts: total=0,4049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.028 job4: (groupid=0, jobs=1): err= 0: pid=78743: Fri Jul 12 06:38:28 2024 00:13:50.028 write: IOPS=510, BW=128MiB/s (134MB/s)(1293MiB/10118msec); 0 zone resets 00:13:50.028 slat (usec): min=19, max=34091, avg=1930.92, stdev=3326.88 00:13:50.028 clat (msec): min=7, max=237, avg=123.27, stdev=12.19 00:13:50.028 lat (msec): min=7, max=237, avg=125.20, stdev=11.89 00:13:50.028 clat percentiles (msec): 00:13:50.028 | 1.00th=[ 97], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 120], 00:13:50.028 | 30.00th=[ 123], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 124], 00:13:50.028 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 133], 00:13:50.028 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 230], 99.95th=[ 230], 00:13:50.028 | 99.99th=[ 239] 00:13:50.028 bw ( KiB/s): min=116736, max=135168, per=8.70%, avg=130739.20, stdev=4274.91, samples=20 00:13:50.028 iops : min= 456, max= 528, avg=510.70, stdev=16.70, samples=20 00:13:50.028 lat (msec) : 10=0.08%, 20=0.08%, 50=0.31%, 100=0.54%, 250=98.99% 00:13:50.028 cpu : usr=0.78%, sys=1.09%, ctx=7348, majf=0, minf=1 00:13:50.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:50.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.028 issued rwts: total=0,5170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.028 job5: (groupid=0, jobs=1): err= 0: pid=78749: Fri Jul 12 06:38:28 2024 00:13:50.028 write: IOPS=1187, BW=297MiB/s (311MB/s)(2985MiB/10050msec); 0 zone resets 00:13:50.028 slat (usec): min=17, max=7191, avg=826.93, stdev=1391.51 00:13:50.028 clat (msec): min=9, max=100, avg=53.04, stdev= 3.72 00:13:50.028 lat (msec): min=9, max=100, avg=53.86, stdev= 3.57 00:13:50.028 clat percentiles (usec): 00:13:50.028 | 1.00th=[49021], 5.00th=[49546], 10.00th=[50070], 20.00th=[50594], 00:13:50.028 | 30.00th=[52167], 40.00th=[52691], 50.00th=[53216], 60.00th=[53216], 00:13:50.028 | 70.00th=[53740], 80.00th=[54264], 90.00th=[55837], 95.00th=[56886], 00:13:50.028 | 99.00th=[58459], 99.50th=[70779], 99.90th=[93848], 99.95th=[94897], 00:13:50.028 | 99.99th=[96994] 00:13:50.028 bw ( KiB/s): min=288256, max=311808, per=20.23%, avg=304000.00, stdev=6398.92, samples=20 00:13:50.028 iops : min= 1126, max= 1218, avg=1187.50, stdev=25.00, samples=20 00:13:50.028 lat (msec) : 10=0.06%, 20=0.06%, 50=8.97%, 100=90.90%, 
250=0.01% 00:13:50.028 cpu : usr=1.64%, sys=2.38%, ctx=12907, majf=0, minf=1 00:13:50.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:50.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.028 issued rwts: total=0,11938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.028 job6: (groupid=0, jobs=1): err= 0: pid=78751: Fri Jul 12 06:38:28 2024 00:13:50.028 write: IOPS=518, BW=130MiB/s (136MB/s)(1310MiB/10104msec); 0 zone resets 00:13:50.028 slat (usec): min=20, max=88905, avg=1874.16, stdev=3429.91 00:13:50.028 clat (msec): min=45, max=246, avg=121.48, stdev=11.42 00:13:50.028 lat (msec): min=47, max=246, avg=123.36, stdev=10.95 00:13:50.028 clat percentiles (msec): 00:13:50.028 | 1.00th=[ 94], 5.00th=[ 114], 10.00th=[ 114], 20.00th=[ 116], 00:13:50.028 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 122], 60.00th=[ 123], 00:13:50.028 | 70.00th=[ 124], 80.00th=[ 124], 90.00th=[ 127], 95.00th=[ 129], 00:13:50.028 | 99.00th=[ 174], 99.50th=[ 203], 99.90th=[ 232], 99.95th=[ 241], 00:13:50.028 | 99.99th=[ 247] 00:13:50.028 bw ( KiB/s): min=118035, max=135168, per=8.82%, avg=132519.35, stdev=4086.79, samples=20 00:13:50.028 iops : min= 461, max= 528, avg=517.65, stdev=15.98, samples=20 00:13:50.028 lat (msec) : 50=0.06%, 100=1.26%, 250=98.68% 00:13:50.028 cpu : usr=0.95%, sys=1.50%, ctx=4533, majf=0, minf=1 00:13:50.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:50.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.028 issued rwts: total=0,5240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.028 job7: (groupid=0, jobs=1): err= 0: pid=78752: Fri Jul 12 06:38:28 2024 00:13:50.028 write: IOPS=522, BW=131MiB/s (137MB/s)(1322MiB/10111msec); 0 zone resets 00:13:50.028 slat (usec): min=16, max=12451, avg=1885.89, stdev=3224.00 00:13:50.028 clat (msec): min=9, max=228, avg=120.42, stdev=12.12 00:13:50.028 lat (msec): min=9, max=228, avg=122.30, stdev=11.88 00:13:50.028 clat percentiles (msec): 00:13:50.028 | 1.00th=[ 67], 5.00th=[ 113], 10.00th=[ 114], 20.00th=[ 116], 00:13:50.028 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 122], 60.00th=[ 123], 00:13:50.028 | 70.00th=[ 124], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 129], 00:13:50.028 | 99.00th=[ 134], 99.50th=[ 176], 99.90th=[ 222], 99.95th=[ 222], 00:13:50.028 | 99.99th=[ 228] 00:13:50.028 bw ( KiB/s): min=126976, max=142621, per=8.90%, avg=133774.25, stdev=3226.84, samples=20 00:13:50.028 iops : min= 496, max= 557, avg=522.55, stdev=12.59, samples=20 00:13:50.028 lat (msec) : 10=0.04%, 20=0.15%, 50=0.53%, 100=1.68%, 250=97.60% 00:13:50.028 cpu : usr=0.99%, sys=1.67%, ctx=3854, majf=0, minf=1 00:13:50.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:50.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.028 issued rwts: total=0,5288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.028 job8: (groupid=0, jobs=1): err= 0: pid=78753: Fri Jul 12 06:38:28 2024 00:13:50.028 write: IOPS=400, BW=100MiB/s 
(105MB/s)(1015MiB/10139msec); 0 zone resets 00:13:50.028 slat (usec): min=17, max=53803, avg=2457.49, stdev=4275.93 00:13:50.028 clat (msec): min=23, max=292, avg=157.30, stdev=16.44 00:13:50.028 lat (msec): min=23, max=292, avg=159.76, stdev=16.12 00:13:50.028 clat percentiles (msec): 00:13:50.028 | 1.00th=[ 91], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:13:50.028 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:13:50.028 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 169], 00:13:50.028 | 99.00th=[ 213], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:13:50.028 | 99.99th=[ 292] 00:13:50.028 bw ( KiB/s): min=92160, max=106496, per=6.81%, avg=102323.20, stdev=3290.06, samples=20 00:13:50.028 iops : min= 360, max= 416, avg=399.70, stdev=12.85, samples=20 00:13:50.029 lat (msec) : 50=0.49%, 100=0.69%, 250=98.37%, 500=0.44% 00:13:50.029 cpu : usr=0.84%, sys=1.23%, ctx=5000, majf=0, minf=1 00:13:50.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:50.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.029 issued rwts: total=0,4060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.029 job9: (groupid=0, jobs=1): err= 0: pid=78754: Fri Jul 12 06:38:28 2024 00:13:50.029 write: IOPS=402, BW=101MiB/s (105MB/s)(1019MiB/10140msec); 0 zone resets 00:13:50.029 slat (usec): min=18, max=29256, avg=2448.02, stdev=4216.77 00:13:50.029 clat (msec): min=22, max=291, avg=156.66, stdev=15.93 00:13:50.029 lat (msec): min=22, max=291, avg=159.11, stdev=15.61 00:13:50.029 clat percentiles (msec): 00:13:50.029 | 1.00th=[ 89], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:13:50.029 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:13:50.029 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 169], 00:13:50.029 | 99.00th=[ 194], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:13:50.029 | 99.99th=[ 292] 00:13:50.029 bw ( KiB/s): min=98304, max=106496, per=6.84%, avg=102768.25, stdev=2306.68, samples=20 00:13:50.029 iops : min= 384, max= 416, avg=401.40, stdev= 9.09, samples=20 00:13:50.029 lat (msec) : 50=0.49%, 100=0.69%, 250=98.38%, 500=0.44% 00:13:50.029 cpu : usr=0.73%, sys=1.29%, ctx=5386, majf=0, minf=1 00:13:50.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:50.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.029 issued rwts: total=0,4077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.029 job10: (groupid=0, jobs=1): err= 0: pid=78755: Fri Jul 12 06:38:28 2024 00:13:50.029 write: IOPS=508, BW=127MiB/s (133MB/s)(1286MiB/10120msec); 0 zone resets 00:13:50.029 slat (usec): min=18, max=70229, avg=1938.99, stdev=3424.28 00:13:50.029 clat (msec): min=73, max=236, avg=123.90, stdev= 9.92 00:13:50.029 lat (msec): min=73, max=236, avg=125.83, stdev= 9.44 00:13:50.029 clat percentiles (msec): 00:13:50.029 | 1.00th=[ 113], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 120], 00:13:50.029 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 124], 00:13:50.029 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 130], 95.00th=[ 133], 00:13:50.029 | 99.00th=[ 176], 99.50th=[ 201], 99.90th=[ 222], 99.95th=[ 230], 
00:13:50.029 | 99.99th=[ 236] 00:13:50.029 bw ( KiB/s): min=102912, max=135168, per=8.66%, avg=130099.20, stdev=7008.19, samples=20 00:13:50.029 iops : min= 402, max= 528, avg=508.20, stdev=27.38, samples=20 00:13:50.029 lat (msec) : 100=0.17%, 250=99.83% 00:13:50.029 cpu : usr=0.88%, sys=1.54%, ctx=5929, majf=0, minf=1 00:13:50.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:50.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:50.029 issued rwts: total=0,5145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.029 00:13:50.029 Run status group 0 (all jobs): 00:13:50.029 WRITE: bw=1468MiB/s (1539MB/s), 99.8MiB/s-297MiB/s (105MB/s-311MB/s), io=14.5GiB (15.6GB), run=10050-10140msec 00:13:50.029 00:13:50.029 Disk stats (read/write): 00:13:50.029 nvme0n1: ios=50/10230, merge=0/0, ticks=56/1213645, in_queue=1213701, util=97.92% 00:13:50.029 nvme10n1: ios=49/8039, merge=0/0, ticks=56/1212320, in_queue=1212376, util=97.97% 00:13:50.029 nvme1n1: ios=43/10437, merge=0/0, ticks=51/1213289, in_queue=1213340, util=98.02% 00:13:50.029 nvme2n1: ios=30/7963, merge=0/0, ticks=37/1212069, in_queue=1212106, util=98.07% 00:13:50.029 nvme3n1: ios=5/10211, merge=0/0, ticks=5/1215123, in_queue=1215128, util=98.11% 00:13:50.029 nvme4n1: ios=0/23728, merge=0/0, ticks=0/1217433, in_queue=1217433, util=98.26% 00:13:50.029 nvme5n1: ios=0/10331, merge=0/0, ticks=0/1213360, in_queue=1213360, util=98.20% 00:13:50.029 nvme6n1: ios=0/10433, merge=0/0, ticks=0/1213512, in_queue=1213512, util=98.37% 00:13:50.029 nvme7n1: ios=0/7987, merge=0/0, ticks=0/1211927, in_queue=1211927, util=98.67% 00:13:50.029 nvme8n1: ios=0/8018, merge=0/0, ticks=0/1212363, in_queue=1212363, util=98.79% 00:13:50.029 nvme9n1: ios=0/10133, merge=0/0, ticks=0/1212769, in_queue=1212769, util=98.74% 00:13:50.029 06:38:28 -- target/multiconnection.sh@36 -- # sync 00:13:50.029 06:38:28 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:50.029 06:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.029 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:50.029 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:13:50.029 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.029 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.029 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.029 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.029 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.029 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:50.029 
06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:50.029 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:13:50.029 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.029 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:50.029 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.029 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.029 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.029 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:50.029 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:50.029 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:13:50.029 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.029 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:50.029 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.029 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.029 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.029 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:50.029 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:50.029 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:13:50.029 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.029 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:50.029 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.029 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.029 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.029 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:50.029 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:50.029 06:38:29 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:13:50.029 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.029 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:50.029 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.029 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.029 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.029 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:50.029 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:50.029 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.029 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:13:50.029 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.029 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:50.029 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.029 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.029 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.029 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.029 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:50.029 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:50.029 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:50.030 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:13:50.030 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.030 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:50.030 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.030 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.030 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.030 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:50.030 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:50.030 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:50.030 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.030 06:38:29 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:13:50.030 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.030 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:50.030 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.030 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.030 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.030 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:50.030 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:50.030 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:50.030 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:13:50.030 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.030 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:50.030 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.030 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.030 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.030 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:50.030 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:50.030 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:50.030 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:13:50.030 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.030 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.030 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:50.030 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.030 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.030 06:38:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.030 06:38:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:50.030 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:50.030 06:38:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:50.030 06:38:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.030 06:38:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.030 06:38:29 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:13:50.289 06:38:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.289 06:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:13:50.289 06:38:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.289 06:38:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:50.289 06:38:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.289 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 06:38:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.289 06:38:29 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:50.289 06:38:29 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:50.289 06:38:29 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:50.289 06:38:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:50.289 06:38:29 -- nvmf/common.sh@116 -- # sync 00:13:50.289 06:38:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:50.289 06:38:29 -- nvmf/common.sh@119 -- # set +e 00:13:50.289 06:38:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:50.289 06:38:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:50.289 rmmod nvme_tcp 00:13:50.289 rmmod nvme_fabrics 00:13:50.289 rmmod nvme_keyring 00:13:50.289 06:38:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:50.289 06:38:30 -- nvmf/common.sh@123 -- # set -e 00:13:50.289 06:38:30 -- nvmf/common.sh@124 -- # return 0 00:13:50.289 06:38:30 -- nvmf/common.sh@477 -- # '[' -n 78066 ']' 00:13:50.289 06:38:30 -- nvmf/common.sh@478 -- # killprocess 78066 00:13:50.289 06:38:30 -- common/autotest_common.sh@926 -- # '[' -z 78066 ']' 00:13:50.289 06:38:30 -- common/autotest_common.sh@930 -- # kill -0 78066 00:13:50.289 06:38:30 -- common/autotest_common.sh@931 -- # uname 00:13:50.289 06:38:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:50.289 06:38:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78066 00:13:50.289 killing process with pid 78066 00:13:50.289 06:38:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:50.289 06:38:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:50.289 06:38:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78066' 00:13:50.289 06:38:30 -- common/autotest_common.sh@945 -- # kill 78066 00:13:50.289 06:38:30 -- common/autotest_common.sh@950 -- # wait 78066 00:13:50.567 06:38:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:50.567 06:38:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:50.567 06:38:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:50.567 06:38:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.567 06:38:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:50.567 06:38:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.567 06:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.567 06:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.567 06:38:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:50.567 00:13:50.567 real 0m48.707s 00:13:50.567 user 2m38.832s 00:13:50.567 sys 0m35.089s 00:13:50.567 06:38:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.567 ************************************ 00:13:50.567 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:50.567 END TEST nvmf_multiconnection 00:13:50.567 
************************************ 00:13:50.567 06:38:30 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:50.567 06:38:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:50.567 06:38:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:50.567 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:50.567 ************************************ 00:13:50.567 START TEST nvmf_initiator_timeout 00:13:50.567 ************************************ 00:13:50.567 06:38:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:50.826 * Looking for test storage... 00:13:50.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.826 06:38:30 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.826 06:38:30 -- nvmf/common.sh@7 -- # uname -s 00:13:50.826 06:38:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.826 06:38:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.826 06:38:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.826 06:38:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.826 06:38:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.826 06:38:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.826 06:38:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.826 06:38:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.826 06:38:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.826 06:38:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.826 06:38:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:13:50.826 06:38:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:13:50.826 06:38:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.826 06:38:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.826 06:38:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.826 06:38:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.826 06:38:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.826 06:38:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.826 06:38:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.826 06:38:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.826 06:38:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.826 06:38:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.826 06:38:30 -- paths/export.sh@5 -- # export PATH 00:13:50.826 06:38:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.826 06:38:30 -- nvmf/common.sh@46 -- # : 0 00:13:50.826 06:38:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.826 06:38:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.826 06:38:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.826 06:38:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.826 06:38:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.826 06:38:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:50.826 06:38:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.826 06:38:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.826 06:38:30 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.826 06:38:30 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.826 06:38:30 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:50.826 06:38:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.826 06:38:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.826 06:38:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.826 06:38:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.826 06:38:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.826 06:38:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.826 06:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.826 06:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.826 06:38:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:50.826 06:38:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:50.826 06:38:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:50.826 06:38:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:50.826 06:38:30 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:13:50.826 06:38:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:50.826 06:38:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.826 06:38:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.826 06:38:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.826 06:38:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:50.826 06:38:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.826 06:38:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.826 06:38:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.826 06:38:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.826 06:38:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.826 06:38:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.826 06:38:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.826 06:38:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.826 06:38:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:50.826 06:38:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:50.826 Cannot find device "nvmf_tgt_br" 00:13:50.826 06:38:30 -- nvmf/common.sh@154 -- # true 00:13:50.826 06:38:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.826 Cannot find device "nvmf_tgt_br2" 00:13:50.826 06:38:30 -- nvmf/common.sh@155 -- # true 00:13:50.826 06:38:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:50.826 06:38:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:50.826 Cannot find device "nvmf_tgt_br" 00:13:50.826 06:38:30 -- nvmf/common.sh@157 -- # true 00:13:50.826 06:38:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:50.826 Cannot find device "nvmf_tgt_br2" 00:13:50.826 06:38:30 -- nvmf/common.sh@158 -- # true 00:13:50.826 06:38:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:50.826 06:38:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:50.826 06:38:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.826 06:38:30 -- nvmf/common.sh@161 -- # true 00:13:50.826 06:38:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.826 06:38:30 -- nvmf/common.sh@162 -- # true 00:13:50.826 06:38:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.826 06:38:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.826 06:38:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.826 06:38:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.085 06:38:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.085 06:38:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.085 06:38:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.085 06:38:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.085 06:38:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
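(For reference: the interface plumbing performed in this section — including the link-up, bridging, and firewall steps that follow below — can be reproduced standalone. A sketch using the same names and addresses as this run: initiator at 10.0.0.1 in the root namespace, target listeners at 10.0.0.2/10.0.0.3 inside nvmf_tgt_ns_spdk, everything joined by the nvmf_br bridge. Run as root.)

  #!/usr/bin/env bash
  # Standalone sketch of the veth/netns topology built here; run as root.
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # One veth pair for the initiator, two for the target.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Move the target-side endpoints into the namespace and assign addresses.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up, inside and outside the namespace.
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the root-namespace ends together and open TCP/4420 for NVMe/TCP.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
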
00:13:51.085 06:38:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:51.085 06:38:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:51.085 06:38:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:51.085 06:38:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:51.085 06:38:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.085 06:38:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.085 06:38:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.085 06:38:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:51.085 06:38:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:51.085 06:38:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.085 06:38:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.085 06:38:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.085 06:38:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.085 06:38:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.085 06:38:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:51.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:51.085 00:13:51.085 --- 10.0.0.2 ping statistics --- 00:13:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.085 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:51.085 06:38:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:51.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:13:51.085 00:13:51.085 --- 10.0.0.3 ping statistics --- 00:13:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.085 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:51.085 06:38:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:51.085 00:13:51.085 --- 10.0.0.1 ping statistics --- 00:13:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.085 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:51.085 06:38:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.085 06:38:30 -- nvmf/common.sh@421 -- # return 0 00:13:51.085 06:38:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.085 06:38:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.085 06:38:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.085 06:38:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.085 06:38:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.085 06:38:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.085 06:38:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.085 06:38:30 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:51.085 06:38:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:51.085 06:38:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:51.085 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:51.085 06:38:30 -- nvmf/common.sh@469 -- # nvmfpid=79119 00:13:51.085 06:38:30 -- nvmf/common.sh@470 -- # waitforlisten 79119 00:13:51.085 06:38:30 -- common/autotest_common.sh@819 -- # '[' -z 79119 ']' 00:13:51.085 06:38:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.085 06:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.085 06:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.085 06:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.085 06:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.085 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:51.344 [2024-07-12 06:38:31.028204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:51.344 [2024-07-12 06:38:31.028302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.344 [2024-07-12 06:38:31.172370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.344 [2024-07-12 06:38:31.213384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.344 [2024-07-12 06:38:31.213573] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.344 [2024-07-12 06:38:31.213588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.344 [2024-07-12 06:38:31.213600] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
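(For reference: the nvmfappstart/waitforlisten step here amounts to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. A minimal sketch, assuming the /var/tmp/spdk.sock default and the repo paths used in this run; polling rpc_get_methods is an assumption about the helper's internals, not taken from the log.)

  #!/usr/bin/env bash
  # Illustrative start-and-wait sketch; not part of the captured log output.
  set -e
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target responds, or give up after ~100 s.
  for _ in $(seq 1 100); do
      if "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          echo "nvmf_tgt (pid $nvmfpid) is listening on $sock"
          exit 0
      fi
      sleep 1
  done
  echo "timed out waiting for nvmf_tgt" >&2
  kill "$nvmfpid"
  exit 1
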
00:13:51.344 [2024-07-12 06:38:31.213672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.344 [2024-07-12 06:38:31.213848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.344 [2024-07-12 06:38:31.214366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.344 [2024-07-12 06:38:31.214422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.278 06:38:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.278 06:38:32 -- common/autotest_common.sh@852 -- # return 0 00:13:52.278 06:38:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.278 06:38:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 06:38:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.278 06:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 Malloc0 00:13:52.278 06:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:52.278 06:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 Delay0 00:13:52.278 06:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.278 06:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 [2024-07-12 06:38:32.107517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.278 06:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.278 06:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 06:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.278 06:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 06:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.278 06:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.278 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 [2024-07-12 06:38:32.135689] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.278 06:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.278 06:38:32 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.536 06:38:32 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.536 06:38:32 -- common/autotest_common.sh@1177 -- # local i=0 00:13:52.536 06:38:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.536 06:38:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:52.536 06:38:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:54.437 06:38:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:54.437 06:38:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:54.437 06:38:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.437 06:38:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:54.437 06:38:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.437 06:38:34 -- common/autotest_common.sh@1187 -- # return 0 00:13:54.437 06:38:34 -- target/initiator_timeout.sh@35 -- # fio_pid=79183 00:13:54.437 06:38:34 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:54.437 06:38:34 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:54.437 [global] 00:13:54.437 thread=1 00:13:54.437 invalidate=1 00:13:54.437 rw=write 00:13:54.437 time_based=1 00:13:54.437 runtime=60 00:13:54.437 ioengine=libaio 00:13:54.437 direct=1 00:13:54.437 bs=4096 00:13:54.437 iodepth=1 00:13:54.437 norandommap=0 00:13:54.437 numjobs=1 00:13:54.437 00:13:54.437 verify_dump=1 00:13:54.437 verify_backlog=512 00:13:54.437 verify_state_save=0 00:13:54.437 do_verify=1 00:13:54.437 verify=crc32c-intel 00:13:54.437 [job0] 00:13:54.437 filename=/dev/nvme0n1 00:13:54.437 Could not set queue depth (nvme0n1) 00:13:54.695 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.695 fio-3.35 00:13:54.695 Starting 1 thread 00:13:57.980 06:38:37 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:57.980 06:38:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.980 06:38:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.980 true 00:13:57.980 06:38:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.980 06:38:37 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:57.980 06:38:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.980 06:38:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.980 true 00:13:57.980 06:38:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.980 06:38:37 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:57.980 06:38:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.980 06:38:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.980 true 00:13:57.980 06:38:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.980 06:38:37 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:57.980 06:38:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.980 06:38:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.980 true 00:13:57.980 06:38:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.980 06:38:37 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:14:00.511 06:38:40 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:14:00.511 06:38:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.511 06:38:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.511 true 00:14:00.511 06:38:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.511 06:38:40 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:14:00.511 06:38:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.511 06:38:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.511 true 00:14:00.511 06:38:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.511 06:38:40 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:14:00.511 06:38:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.511 06:38:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.511 true 00:14:00.511 06:38:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.511 06:38:40 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:14:00.511 06:38:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.511 06:38:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.511 true 00:14:00.511 06:38:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.511 06:38:40 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:14:00.511 06:38:40 -- target/initiator_timeout.sh@54 -- # wait 79183 00:14:56.835 00:14:56.835 job0: (groupid=0, jobs=1): err= 0: pid=79204: Fri Jul 12 06:39:34 2024 00:14:56.835 read: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec) 00:14:56.835 slat (usec): min=10, max=11469, avg=14.78, stdev=65.43 00:14:56.835 clat (usec): min=150, max=40866k, avg=1040.41, stdev=185293.54 00:14:56.835 lat (usec): min=163, max=40866k, avg=1055.19, stdev=185293.55 00:14:56.835 clat percentiles (usec): 00:14:56.835 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:14:56.835 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:14:56.835 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:14:56.835 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 420], 99.95th=[ 502], 00:14:56.835 | 99.99th=[ 955] 00:14:56.835 write: IOPS=811, BW=3244KiB/s (3322kB/s)(190MiB/60000msec); 0 zone resets 00:14:56.835 slat (usec): min=12, max=521, avg=20.88, stdev= 5.90 00:14:56.835 clat (usec): min=117, max=7159, avg=153.86, stdev=41.32 00:14:56.835 lat (usec): min=135, max=7179, avg=174.74, stdev=41.88 00:14:56.835 clat percentiles (usec): 00:14:56.835 | 1.00th=[ 125], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:14:56.835 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:14:56.835 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 186], 00:14:56.835 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 367], 99.95th=[ 498], 00:14:56.835 | 99.99th=[ 914] 00:14:56.835 bw ( KiB/s): min= 5144, max=12288, per=100.00%, avg=10023.05, stdev=1506.41, samples=38 00:14:56.835 iops : min= 1286, max= 3072, avg=2505.76, stdev=376.60, samples=38 00:14:56.835 lat (usec) : 250=99.19%, 500=0.76%, 750=0.04%, 1000=0.01% 00:14:56.835 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, >=2000=0.01% 00:14:56.835 cpu : usr=0.58%, sys=2.24%, ctx=97324, majf=0, minf=2 00:14:56.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:56.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.835 issued rwts: total=48640,48666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:56.835 00:14:56.835 Run status group 0 (all jobs): 00:14:56.835 READ: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:14:56.835 WRITE: bw=3244KiB/s (3322kB/s), 3244KiB/s-3244KiB/s (3322kB/s-3322kB/s), io=190MiB (199MB), run=60000-60000msec 00:14:56.835 00:14:56.835 Disk stats (read/write): 00:14:56.835 nvme0n1: ios=48543/48640, merge=0/0, ticks=10199/8056, in_queue=18255, util=99.55% 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.835 06:39:34 -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.835 06:39:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:56.835 06:39:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.835 06:39:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:56.835 06:39:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.835 06:39:34 -- common/autotest_common.sh@1210 -- # return 0 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:56.835 nvmf hotplug test: fio successful as expected 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.835 06:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.835 06:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 06:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:56.835 06:39:34 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:56.835 06:39:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:56.835 06:39:34 -- nvmf/common.sh@116 -- # sync 00:14:56.835 06:39:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:56.835 06:39:34 -- nvmf/common.sh@119 -- # set +e 00:14:56.835 06:39:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:56.835 06:39:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:56.835 rmmod nvme_tcp 00:14:56.835 rmmod nvme_fabrics 00:14:56.835 rmmod nvme_keyring 00:14:56.835 06:39:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:56.835 06:39:34 -- nvmf/common.sh@123 -- # set -e 00:14:56.835 06:39:34 -- nvmf/common.sh@124 -- # return 0 00:14:56.835 06:39:34 -- nvmf/common.sh@477 -- # '[' -n 79119 ']' 00:14:56.835 06:39:34 -- nvmf/common.sh@478 -- # killprocess 79119 00:14:56.835 06:39:34 -- common/autotest_common.sh@926 -- # '[' -z 79119 ']' 00:14:56.835 06:39:34 -- common/autotest_common.sh@930 -- # kill -0 79119 00:14:56.835 06:39:34 -- common/autotest_common.sh@931 -- # uname 00:14:56.835 06:39:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.835 06:39:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79119 00:14:56.835 killing 
process with pid 79119 00:14:56.835 06:39:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:56.835 06:39:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:56.835 06:39:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79119' 00:14:56.835 06:39:34 -- common/autotest_common.sh@945 -- # kill 79119 00:14:56.835 06:39:34 -- common/autotest_common.sh@950 -- # wait 79119 00:14:56.835 06:39:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.835 06:39:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.835 06:39:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.835 06:39:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.835 06:39:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.835 06:39:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.835 06:39:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.835 06:39:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.835 06:39:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:56.835 ************************************ 00:14:56.835 END TEST nvmf_initiator_timeout 00:14:56.835 ************************************ 00:14:56.835 00:14:56.835 real 1m4.483s 00:14:56.835 user 3m53.760s 00:14:56.835 sys 0m21.203s 00:14:56.835 06:39:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.835 06:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 06:39:34 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:56.835 06:39:34 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:56.835 06:39:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:56.835 06:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 06:39:35 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:56.835 06:39:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:56.835 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.835 06:39:35 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:56.835 06:39:35 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:56.835 06:39:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:56.836 06:39:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.836 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 ************************************ 00:14:56.836 START TEST nvmf_identify 00:14:56.836 ************************************ 00:14:56.836 06:39:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:56.836 * Looking for test storage... 
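Condensed from the xtrace above, the initiator-timeout test that just finished wraps a malloc bdev in a delay bdev, exports it over NVMe/TCP, raises the delay past the kernel initiator's default 30 s I/O timeout while fio is writing, then restores it and expects fio to finish cleanly. A sketch of the RPC sequence, assuming rpc_cmd resolves to scripts/rpc.py against the running target (latency values are in microseconds, so 31000000 is ~31 s):

  rpc=scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us initial latencies
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with fio running against the exported namespace, push latencies past the timeout
  $rpc bdev_delay_update_latency Delay0 avg_read  31000000
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read  31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000             # as issued in this run
  sleep 3
  # then drop everything back to 30 us and let fio complete
  for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$lat" 30
  done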
00:14:56.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:56.836 06:39:35 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.836 06:39:35 -- nvmf/common.sh@7 -- # uname -s 00:14:56.836 06:39:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.836 06:39:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.836 06:39:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.836 06:39:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.836 06:39:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.836 06:39:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.836 06:39:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.836 06:39:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.836 06:39:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.836 06:39:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:14:56.836 06:39:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:14:56.836 06:39:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.836 06:39:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.836 06:39:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.836 06:39:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.836 06:39:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.836 06:39:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.836 06:39:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.836 06:39:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.836 06:39:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.836 06:39:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.836 06:39:35 -- paths/export.sh@5 
-- # export PATH 00:14:56.836 06:39:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.836 06:39:35 -- nvmf/common.sh@46 -- # : 0 00:14:56.836 06:39:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.836 06:39:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.836 06:39:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.836 06:39:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.836 06:39:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.836 06:39:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.836 06:39:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.836 06:39:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.836 06:39:35 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.836 06:39:35 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.836 06:39:35 -- host/identify.sh@14 -- # nvmftestinit 00:14:56.836 06:39:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.836 06:39:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.836 06:39:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:56.836 06:39:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.836 06:39:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.836 06:39:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.836 06:39:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.836 06:39:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.836 06:39:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:56.836 06:39:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.836 06:39:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.836 06:39:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.836 06:39:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:56.836 06:39:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.836 06:39:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.836 06:39:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.836 06:39:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.836 06:39:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.836 06:39:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.836 06:39:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.836 06:39:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.836 06:39:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:56.836 06:39:35 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:56.836 Cannot find device "nvmf_tgt_br" 00:14:56.836 06:39:35 -- nvmf/common.sh@154 -- # true 00:14:56.836 06:39:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.836 Cannot find device "nvmf_tgt_br2" 00:14:56.836 06:39:35 -- nvmf/common.sh@155 -- # true 00:14:56.836 06:39:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:56.836 06:39:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:56.836 Cannot find device "nvmf_tgt_br" 00:14:56.836 06:39:35 -- nvmf/common.sh@157 -- # true 00:14:56.836 06:39:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:56.836 Cannot find device "nvmf_tgt_br2" 00:14:56.836 06:39:35 -- nvmf/common.sh@158 -- # true 00:14:56.836 06:39:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:56.836 06:39:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:56.836 06:39:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.836 06:39:35 -- nvmf/common.sh@161 -- # true 00:14:56.836 06:39:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.836 06:39:35 -- nvmf/common.sh@162 -- # true 00:14:56.836 06:39:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.836 06:39:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.836 06:39:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.836 06:39:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.836 06:39:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.836 06:39:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.836 06:39:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.836 06:39:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.836 06:39:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.836 06:39:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:56.836 06:39:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:56.836 06:39:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:56.836 06:39:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:56.836 06:39:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.836 06:39:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.836 06:39:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.836 06:39:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:56.836 06:39:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:56.836 06:39:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.836 06:39:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.836 06:39:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.836 06:39:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.836 06:39:35 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.836 06:39:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:56.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:14:56.836 00:14:56.836 --- 10.0.0.2 ping statistics --- 00:14:56.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.836 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:14:56.836 06:39:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:56.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:56.836 00:14:56.836 --- 10.0.0.3 ping statistics --- 00:14:56.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.836 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:56.836 06:39:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:56.836 00:14:56.836 --- 10.0.0.1 ping statistics --- 00:14:56.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.836 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:56.836 06:39:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.836 06:39:35 -- nvmf/common.sh@421 -- # return 0 00:14:56.836 06:39:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.836 06:39:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.836 06:39:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.836 06:39:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.836 06:39:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.836 06:39:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.836 06:39:35 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:56.836 06:39:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:56.836 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.836 06:39:35 -- host/identify.sh@19 -- # nvmfpid=80052 00:14:56.836 06:39:35 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.836 06:39:35 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.836 06:39:35 -- host/identify.sh@23 -- # waitforlisten 80052 00:14:56.836 06:39:35 -- common/autotest_common.sh@819 -- # '[' -z 80052 ']' 00:14:56.836 06:39:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.836 06:39:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.836 06:39:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.836 06:39:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.836 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 [2024-07-12 06:39:35.537875] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
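The nvmf_veth_init block above builds the virtual topology this test runs on: the target listens inside the nvmf_tgt_ns_spdk network namespace while the initiator stays in the root namespace, with the two veth legs joined by a Linux bridge. Stripped of xtrace noise (and omitting the second target pair nvmf_tgt_if2/10.0.0.3 and the individual link-up steps), the essential commands, run as root, are:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above confirm initiator-to-target (10.0.0.1 <-> 10.0.0.2) connectivity through the bridge before the target application is started inside the namespace.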
00:14:56.836 [2024-07-12 06:39:35.538192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.836 [2024-07-12 06:39:35.677717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.836 [2024-07-12 06:39:35.711144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.836 [2024-07-12 06:39:35.711564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.836 [2024-07-12 06:39:35.711617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.836 [2024-07-12 06:39:35.711741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.836 [2024-07-12 06:39:35.711935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.836 [2024-07-12 06:39:35.712747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.836 [2024-07-12 06:39:35.712870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.836 [2024-07-12 06:39:35.712874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.836 06:39:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.836 06:39:36 -- common/autotest_common.sh@852 -- # return 0 00:14:56.836 06:39:36 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 [2024-07-12 06:39:36.529149] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:56.836 06:39:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 06:39:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 Malloc0 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 [2024-07-12 06:39:36.626922] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:56.836 06:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.836 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.836 [2024-07-12 06:39:36.642718] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:56.836 [ 00:14:56.836 { 00:14:56.836 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.836 "subtype": "Discovery", 00:14:56.836 "listen_addresses": [ 00:14:56.836 { 00:14:56.836 "transport": "TCP", 00:14:56.836 "trtype": "TCP", 00:14:56.836 "adrfam": "IPv4", 00:14:56.836 "traddr": "10.0.0.2", 00:14:56.836 "trsvcid": "4420" 00:14:56.836 } 00:14:56.836 ], 00:14:56.836 "allow_any_host": true, 00:14:56.836 "hosts": [] 00:14:56.836 }, 00:14:56.836 { 00:14:56.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.836 "subtype": "NVMe", 00:14:56.836 "listen_addresses": [ 00:14:56.836 { 00:14:56.836 "transport": "TCP", 00:14:56.836 "trtype": "TCP", 00:14:56.836 "adrfam": "IPv4", 00:14:56.836 "traddr": "10.0.0.2", 00:14:56.836 "trsvcid": "4420" 00:14:56.836 } 00:14:56.836 ], 00:14:56.836 "allow_any_host": true, 00:14:56.836 "hosts": [], 00:14:56.836 "serial_number": "SPDK00000000000001", 00:14:56.836 "model_number": "SPDK bdev Controller", 00:14:56.836 "max_namespaces": 32, 00:14:56.836 "min_cntlid": 1, 00:14:56.836 "max_cntlid": 65519, 00:14:56.836 "namespaces": [ 00:14:56.836 { 00:14:56.836 "nsid": 1, 00:14:56.836 "bdev_name": "Malloc0", 00:14:56.836 "name": "Malloc0", 00:14:56.836 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:56.836 "eui64": "ABCDEF0123456789", 00:14:56.836 "uuid": "d4c7e807-7e2d-4320-91a6-31fc8e8a1e6d" 00:14:56.836 } 00:14:56.836 ] 00:14:56.836 } 00:14:56.836 ] 00:14:56.836 06:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.836 06:39:36 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:56.836 [2024-07-12 06:39:36.685084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
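With the discovery and cnode1 subsystems confirmed by nvmf_get_subsystems, the test hands off to spdk_nvme_identify pointed at the discovery NQN; `-L all` enables every debug log flag, which is what produces the *DEBUG* flood that follows. The step can be reproduced by hand against the same target (binary and rpc.py paths as used by this job; the jq line is an illustrative extra for listing NQNs first):

  ./scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all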
00:14:56.836 [2024-07-12 06:39:36.685128] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80087 ] 00:14:57.099 [2024-07-12 06:39:36.824063] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:57.099 [2024-07-12 06:39:36.824155] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:57.099 [2024-07-12 06:39:36.824163] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:57.099 [2024-07-12 06:39:36.824175] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:57.099 [2024-07-12 06:39:36.824187] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:57.099 [2024-07-12 06:39:36.824313] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:57.099 [2024-07-12 06:39:36.824404] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x186ad70 0 00:14:57.099 [2024-07-12 06:39:36.836989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:57.099 [2024-07-12 06:39:36.837014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:57.099 [2024-07-12 06:39:36.837036] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:57.099 [2024-07-12 06:39:36.837040] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:57.099 [2024-07-12 06:39:36.837096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.837106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.837111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.099 [2024-07-12 06:39:36.837125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:57.099 [2024-07-12 06:39:36.837156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.099 [2024-07-12 06:39:36.845017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.099 [2024-07-12 06:39:36.845040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.099 [2024-07-12 06:39:36.845061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.845066] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.099 [2024-07-12 06:39:36.845079] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:57.099 [2024-07-12 06:39:36.845087] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:57.099 [2024-07-12 06:39:36.845109] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:57.099 [2024-07-12 06:39:36.845125] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.845130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.099 [2024-07-12 
06:39:36.845134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.099 [2024-07-12 06:39:36.845143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.099 [2024-07-12 06:39:36.845170] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.099 [2024-07-12 06:39:36.845251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.099 [2024-07-12 06:39:36.845258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.099 [2024-07-12 06:39:36.845262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.845266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.099 [2024-07-12 06:39:36.845273] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:57.099 [2024-07-12 06:39:36.845281] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:57.099 [2024-07-12 06:39:36.845305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.845309] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.099 [2024-07-12 06:39:36.845313] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.099 [2024-07-12 06:39:36.845321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.100 [2024-07-12 06:39:36.845340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.845394] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.845401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.845405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.845417] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:57.100 [2024-07-12 06:39:36.845426] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.100 [2024-07-12 06:39:36.845433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.845449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.100 [2024-07-12 06:39:36.845468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.845515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.845521] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.845525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845545] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.845553] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.100 [2024-07-12 06:39:36.845563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.845579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.100 [2024-07-12 06:39:36.845602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.845653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.845660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.845664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.845674] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:57.100 [2024-07-12 06:39:36.845679] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:57.100 [2024-07-12 06:39:36.845687] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.100 [2024-07-12 06:39:36.845793] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:57.100 [2024-07-12 06:39:36.845799] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.100 [2024-07-12 06:39:36.845808] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845816] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.845823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.100 [2024-07-12 06:39:36.845841] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.845888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.845895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.845899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
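The _nvme_ctrlr_set_state lines in this stretch are SPDK's controller-init state machine walking the standard NVMe enable handshake, with the FABRIC PROPERTY GET/SET commands serving as the Fabrics stand-in for PCIe register access: read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, then wait for CSTS.RDY = 1, after which the trace below proceeds to IDENTIFY, AER configuration, and keep-alive setup.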
00:14:57.100 [2024-07-12 06:39:36.845903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.845909] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.100 [2024-07-12 06:39:36.845919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.845927] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.845935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.100 [2024-07-12 06:39:36.845951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.846011] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.846018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.846022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846027] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.846033] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.100 [2024-07-12 06:39:36.846039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:57.100 [2024-07-12 06:39:36.846073] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:57.100 [2024-07-12 06:39:36.846091] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.100 [2024-07-12 06:39:36.846102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.846118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.100 [2024-07-12 06:39:36.846139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.846233] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.100 [2024-07-12 06:39:36.846241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.100 [2024-07-12 06:39:36.846245] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846249] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x186ad70): datao=0, datal=4096, cccid=0 00:14:57.100 [2024-07-12 06:39:36.846255] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b45f0) on tqpair(0x186ad70): expected_datao=0, 
payload_size=4096 00:14:57.100 [2024-07-12 06:39:36.846264] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846269] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.846285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.846289] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846293] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.846315] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:57.100 [2024-07-12 06:39:36.846326] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:57.100 [2024-07-12 06:39:36.846331] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:57.100 [2024-07-12 06:39:36.846337] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:57.100 [2024-07-12 06:39:36.846342] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:57.100 [2024-07-12 06:39:36.846348] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:57.100 [2024-07-12 06:39:36.846365] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.100 [2024-07-12 06:39:36.846374] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846383] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.846392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.100 [2024-07-12 06:39:36.846417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.100 [2024-07-12 06:39:36.846487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.100 [2024-07-12 06:39:36.846494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.100 [2024-07-12 06:39:36.846498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b45f0) on tqpair=0x186ad70 00:14:57.100 [2024-07-12 06:39:36.846511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x186ad70) 00:14:57.100 [2024-07-12 06:39:36.846526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.100 [2024-07-12 
06:39:36.846533] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846537] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.100 [2024-07-12 06:39:36.846541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.846547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.101 [2024-07-12 06:39:36.846555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.846569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.101 [2024-07-12 06:39:36.846575] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.846589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.101 [2024-07-12 06:39:36.846595] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.101 [2024-07-12 06:39:36.846634] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:57.101 [2024-07-12 06:39:36.846647] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.846671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.101 [2024-07-12 06:39:36.846694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b45f0, cid 0, qid 0 00:14:57.101 [2024-07-12 06:39:36.846702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4750, cid 1, qid 0 00:14:57.101 [2024-07-12 06:39:36.846707] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b48b0, cid 2, qid 0 00:14:57.101 [2024-07-12 06:39:36.846712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.101 [2024-07-12 06:39:36.846717] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4b70, cid 4, qid 0 00:14:57.101 [2024-07-12 06:39:36.846818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.101 [2024-07-12 06:39:36.846825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.101 [2024-07-12 06:39:36.846829] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x18b4b70) on tqpair=0x186ad70 00:14:57.101 [2024-07-12 06:39:36.846840] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:57.101 [2024-07-12 06:39:36.846846] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:57.101 [2024-07-12 06:39:36.846858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.846875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.101 [2024-07-12 06:39:36.846893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4b70, cid 4, qid 0 00:14:57.101 [2024-07-12 06:39:36.846968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.101 [2024-07-12 06:39:36.846981] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.101 [2024-07-12 06:39:36.846986] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.846991] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x186ad70): datao=0, datal=4096, cccid=4 00:14:57.101 [2024-07-12 06:39:36.846996] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4b70) on tqpair(0x186ad70): expected_datao=0, payload_size=4096 00:14:57.101 [2024-07-12 06:39:36.847005] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847009] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.101 [2024-07-12 06:39:36.847025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.101 [2024-07-12 06:39:36.847029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4b70) on tqpair=0x186ad70 00:14:57.101 [2024-07-12 06:39:36.847048] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:57.101 [2024-07-12 06:39:36.847076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847086] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.847094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.101 [2024-07-12 06:39:36.847101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.847116] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.101 [2024-07-12 06:39:36.847152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4b70, cid 4, qid 0 00:14:57.101 [2024-07-12 06:39:36.847160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4cd0, cid 5, qid 0 00:14:57.101 [2024-07-12 06:39:36.847270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.101 [2024-07-12 06:39:36.847277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.101 [2024-07-12 06:39:36.847281] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847285] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x186ad70): datao=0, datal=1024, cccid=4 00:14:57.101 [2024-07-12 06:39:36.847290] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4b70) on tqpair(0x186ad70): expected_datao=0, payload_size=1024 00:14:57.101 [2024-07-12 06:39:36.847298] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847302] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.101 [2024-07-12 06:39:36.847315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.101 [2024-07-12 06:39:36.847318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847323] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4cd0) on tqpair=0x186ad70 00:14:57.101 [2024-07-12 06:39:36.847342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.101 [2024-07-12 06:39:36.847350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.101 [2024-07-12 06:39:36.847354] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4b70) on tqpair=0x186ad70 00:14:57.101 [2024-07-12 06:39:36.847371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.101 [2024-07-12 06:39:36.847380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x186ad70) 00:14:57.101 [2024-07-12 06:39:36.847387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.101 [2024-07-12 06:39:36.847410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4b70, cid 4, qid 0 00:14:57.101 [2024-07-12 06:39:36.847487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.101 [2024-07-12 06:39:36.847494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.102 [2024-07-12 06:39:36.847498] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847502] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x186ad70): datao=0, datal=3072, cccid=4 00:14:57.102 [2024-07-12 06:39:36.847507] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4b70) on tqpair(0x186ad70): expected_datao=0, payload_size=3072 00:14:57.102 [2024-07-12 
06:39:36.847515] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847520] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.102 [2024-07-12 06:39:36.847534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.102 [2024-07-12 06:39:36.847538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4b70) on tqpair=0x186ad70 00:14:57.102 [2024-07-12 06:39:36.847553] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x186ad70) 00:14:57.102 [2024-07-12 06:39:36.847569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.102 [2024-07-12 06:39:36.847592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4b70, cid 4, qid 0 00:14:57.102 [2024-07-12 06:39:36.847667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.102 [2024-07-12 06:39:36.847674] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.102 [2024-07-12 06:39:36.847678] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847682] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x186ad70): datao=0, datal=8, cccid=4 00:14:57.102 [2024-07-12 06:39:36.847687] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4b70) on tqpair(0x186ad70): expected_datao=0, payload_size=8 00:14:57.102 [2024-07-12 06:39:36.847695] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847699] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.102 [2024-07-12 06:39:36.847714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.102 [2024-07-12 06:39:36.847721] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.102 [2024-07-12 06:39:36.847725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.102 ===================================================== 00:14:57.102 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:57.102 ===================================================== 00:14:57.102 Controller Capabilities/Features 00:14:57.102 ================================ 00:14:57.102 Vendor ID: 0000 00:14:57.102 Subsystem Vendor ID: 0000 00:14:57.102 Serial Number: .................... 00:14:57.102 Model Number: ........................................ 
00:14:57.102 Firmware Version: 24.01.1 00:14:57.102 Recommended Arb Burst: 0 00:14:57.102 IEEE OUI Identifier: 00 00 00 00:14:57.102 Multi-path I/O 00:14:57.102 May have multiple subsystem ports: No 00:14:57.102 May have multiple controllers: No 00:14:57.102 Associated with SR-IOV VF: No 00:14:57.102 Max Data Transfer Size: 131072 00:14:57.102 Max Number of Namespaces: 0 00:14:57.102 Max Number of I/O Queues: 1024 00:14:57.102 NVMe Specification Version (VS): 1.3 00:14:57.102 NVMe Specification Version (Identify): 1.3 00:14:57.102 Maximum Queue Entries: 128 00:14:57.102 Contiguous Queues Required: Yes 00:14:57.102 Arbitration Mechanisms Supported 00:14:57.102 Weighted Round Robin: Not Supported 00:14:57.102 Vendor Specific: Not Supported 00:14:57.102 Reset Timeout: 15000 ms 00:14:57.102 Doorbell Stride: 4 bytes 00:14:57.102 NVM Subsystem Reset: Not Supported 00:14:57.102 Command Sets Supported 00:14:57.102 NVM Command Set: Supported 00:14:57.102 Boot Partition: Not Supported 00:14:57.102 Memory Page Size Minimum: 4096 bytes 00:14:57.102 Memory Page Size Maximum: 4096 bytes 00:14:57.102 Persistent Memory Region: Not Supported 00:14:57.102 Optional Asynchronous Events Supported 00:14:57.102 Namespace Attribute Notices: Not Supported 00:14:57.102 Firmware Activation Notices: Not Supported 00:14:57.102 ANA Change Notices: Not Supported 00:14:57.102 PLE Aggregate Log Change Notices: Not Supported 00:14:57.102 LBA Status Info Alert Notices: Not Supported 00:14:57.102 EGE Aggregate Log Change Notices: Not Supported 00:14:57.102 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.102 Zone Descriptor Change Notices: Not Supported 00:14:57.102 Discovery Log Change Notices: Supported 00:14:57.102 Controller Attributes 00:14:57.102 128-bit Host Identifier: Not Supported 00:14:57.102 Non-Operational Permissive Mode: Not Supported 00:14:57.102 NVM Sets: Not Supported 00:14:57.102 Read Recovery Levels: Not Supported 00:14:57.102 Endurance Groups: Not Supported 00:14:57.102 Predictable Latency Mode: Not Supported 00:14:57.102 Traffic Based Keep ALive: Not Supported 00:14:57.102 Namespace Granularity: Not Supported 00:14:57.102 SQ Associations: Not Supported 00:14:57.102 UUID List: Not Supported 00:14:57.102 Multi-Domain Subsystem: Not Supported 00:14:57.102 Fixed Capacity Management: Not Supported 00:14:57.102 Variable Capacity Management: Not Supported 00:14:57.102 Delete Endurance Group: Not Supported 00:14:57.102 Delete NVM Set: Not Supported 00:14:57.102 Extended LBA Formats Supported: Not Supported 00:14:57.102 Flexible Data Placement Supported: Not Supported 00:14:57.102 00:14:57.102 Controller Memory Buffer Support 00:14:57.102 ================================ 00:14:57.102 Supported: No 00:14:57.102 00:14:57.102 Persistent Memory Region Support 00:14:57.102 ================================ 00:14:57.102 Supported: No 00:14:57.102 00:14:57.102 Admin Command Set Attributes 00:14:57.102 ============================ 00:14:57.102 Security Send/Receive: Not Supported 00:14:57.102 Format NVM: Not Supported 00:14:57.102 Firmware Activate/Download: Not Supported 00:14:57.102 Namespace Management: Not Supported 00:14:57.102 Device Self-Test: Not Supported 00:14:57.102 Directives: Not Supported 00:14:57.102 NVMe-MI: Not Supported 00:14:57.102 Virtualization Management: Not Supported 00:14:57.102 Doorbell Buffer Config: Not Supported 00:14:57.102 Get LBA Status Capability: Not Supported 00:14:57.102 Command & Feature Lockdown Capability: Not Supported 00:14:57.102 Abort Command Limit: 1 00:14:57.102 
Async Event Request Limit: 4 00:14:57.103 Number of Firmware Slots: N/A 00:14:57.103 Firmware Slot 1 Read-Only: N/A 00:14:57.103 Firmware Activation Without Reset: N/A 00:14:57.103 Multiple Update Detection Support: N/A 00:14:57.103 Firmware Update Granularity: No Information Provided 00:14:57.103 Per-Namespace SMART Log: No 00:14:57.103 Asymmetric Namespace Access Log Page: Not Supported 00:14:57.103 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:57.103 Command Effects Log Page: Not Supported 00:14:57.103 Get Log Page Extended Data: Supported 00:14:57.103 Telemetry Log Pages: Not Supported 00:14:57.103 Persistent Event Log Pages: Not Supported 00:14:57.103 Supported Log Pages Log Page: May Support 00:14:57.103 Commands Supported & Effects Log Page: Not Supported 00:14:57.103 Feature Identifiers & Effects Log Page:May Support 00:14:57.103 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.103 Data Area 4 for Telemetry Log: Not Supported 00:14:57.103 Error Log Page Entries Supported: 128 00:14:57.103 Keep Alive: Not Supported 00:14:57.103 00:14:57.103 NVM Command Set Attributes 00:14:57.103 ========================== 00:14:57.103 Submission Queue Entry Size 00:14:57.103 Max: 1 00:14:57.103 Min: 1 00:14:57.103 Completion Queue Entry Size 00:14:57.103 Max: 1 00:14:57.103 Min: 1 00:14:57.103 Number of Namespaces: 0 00:14:57.103 Compare Command: Not Supported 00:14:57.103 Write Uncorrectable Command: Not Supported 00:14:57.103 Dataset Management Command: Not Supported 00:14:57.103 Write Zeroes Command: Not Supported 00:14:57.103 Set Features Save Field: Not Supported 00:14:57.103 Reservations: Not Supported 00:14:57.103 Timestamp: Not Supported 00:14:57.103 Copy: Not Supported 00:14:57.103 Volatile Write Cache: Not Present 00:14:57.103 Atomic Write Unit (Normal): 1 00:14:57.103 Atomic Write Unit (PFail): 1 00:14:57.103 Atomic Compare & Write Unit: 1 00:14:57.103 Fused Compare & Write: Supported 00:14:57.103 Scatter-Gather List 00:14:57.103 SGL Command Set: Supported 00:14:57.103 SGL Keyed: Supported 00:14:57.103 SGL Bit Bucket Descriptor: Not Supported 00:14:57.103 SGL Metadata Pointer: Not Supported 00:14:57.103 Oversized SGL: Not Supported 00:14:57.103 SGL Metadata Address: Not Supported 00:14:57.103 SGL Offset: Supported 00:14:57.103 Transport SGL Data Block: Not Supported 00:14:57.103 Replay Protected Memory Block: Not Supported 00:14:57.103 00:14:57.103 Firmware Slot Information 00:14:57.103 ========================= 00:14:57.103 Active slot: 0 00:14:57.103 00:14:57.103 00:14:57.103 Error Log 00:14:57.103 ========= 00:14:57.103 00:14:57.103 Active Namespaces 00:14:57.103 ================= 00:14:57.103 Discovery Log Page 00:14:57.103 ================== 00:14:57.103 Generation Counter: 2 00:14:57.103 Number of Records: 2 00:14:57.103 Record Format: 0 00:14:57.103 00:14:57.103 Discovery Log Entry 0 00:14:57.103 ---------------------- 00:14:57.103 Transport Type: 3 (TCP) 00:14:57.103 Address Family: 1 (IPv4) 00:14:57.103 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:57.103 Entry Flags: 00:14:57.103 Duplicate Returned Information: 1 00:14:57.103 Explicit Persistent Connection Support for Discovery: 1 00:14:57.103 Transport Requirements: 00:14:57.103 Secure Channel: Not Required 00:14:57.103 Port ID: 0 (0x0000) 00:14:57.103 Controller ID: 65535 (0xffff) 00:14:57.103 Admin Max SQ Size: 128 00:14:57.103 Transport Service Identifier: 4420 00:14:57.103 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:57.103 Transport Address: 10.0.0.2 00:14:57.103 
Discovery Log Entry 1 00:14:57.103 ---------------------- 00:14:57.103 Transport Type: 3 (TCP) 00:14:57.103 Address Family: 1 (IPv4) 00:14:57.103 Subsystem Type: 2 (NVM Subsystem) 00:14:57.103 Entry Flags: 00:14:57.103 Duplicate Returned Information: 0 00:14:57.103 Explicit Persistent Connection Support for Discovery: 0 00:14:57.103 Transport Requirements: 00:14:57.103 Secure Channel: Not Required 00:14:57.103 Port ID: 0 (0x0000) 00:14:57.103 Controller ID: 65535 (0xffff) 00:14:57.103 Admin Max SQ Size: 128 00:14:57.103 Transport Service Identifier: 4420 00:14:57.103 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:57.103 Transport Address: 10.0.0.2 [2024-07-12 06:39:36.847729] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4b70) on tqpair=0x186ad70 00:14:57.103 [2024-07-12 06:39:36.847853] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:57.103 [2024-07-12 06:39:36.847875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.103 [2024-07-12 06:39:36.847883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.103 [2024-07-12 06:39:36.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.103 [2024-07-12 06:39:36.847897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.103 [2024-07-12 06:39:36.847907] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.847912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.847916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.103 [2024-07-12 06:39:36.847924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.103 [2024-07-12 06:39:36.847951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.103 [2024-07-12 06:39:36.848026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.103 [2024-07-12 06:39:36.848034] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.103 [2024-07-12 06:39:36.848038] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.103 [2024-07-12 06:39:36.848051] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848056] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848060] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.103 [2024-07-12 06:39:36.848068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.103 [2024-07-12 06:39:36.848093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.103 [2024-07-12 06:39:36.848179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.103 [2024-07-12 
06:39:36.848186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.103 [2024-07-12 06:39:36.848190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.103 [2024-07-12 06:39:36.848200] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:57.103 [2024-07-12 06:39:36.848205] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:57.103 [2024-07-12 06:39:36.848216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.103 [2024-07-12 06:39:36.848231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.103 [2024-07-12 06:39:36.848249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.103 [2024-07-12 06:39:36.848294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.103 [2024-07-12 06:39:36.848300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.103 [2024-07-12 06:39:36.848304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.103 [2024-07-12 06:39:36.848320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.103 [2024-07-12 06:39:36.848325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.848336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.848353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.104 [2024-07-12 06:39:36.848402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.848408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.848412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848416] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.848427] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.848443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.848460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 
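[editor's note] The two discovery records dumped above (Generation Counter 2, Number of Records 2: the discovery subsystem itself plus the NVM subsystem nqn.2016-06.io.spdk:cnode1) come back through the GET LOG PAGE (02) fetches of log page 70h traced earlier, and the final 8-byte fetch appears to be the usual re-read of the generation counter to confirm the log did not change mid-transfer. A minimal sketch of decoding that 4 KiB payload, laid out per the NVMe-oF spec rather than SPDK's own headers (the struct and helper below are illustrative, not library definitions):

/* Decode a raw discovery log page buffer (spec layout, not SPDK's types). */
#include <stdint.h>
#include <stdio.h>

struct disc_log_entry {            /* 1024 bytes per entry, per spec */
    uint8_t  trtype;               /* 3 = TCP, as in the dump above  */
    uint8_t  adrfam;               /* 1 = IPv4                       */
    uint8_t  subtype;              /* 2 = NVM subsystem, 3 = discovery */
    uint8_t  treq;
    uint16_t portid;
    uint16_t cntlid;               /* 0xffff = dynamic controller    */
    uint16_t asqsz;                /* admin max SQ size (128 above)  */
    uint8_t  reserved10[22];
    char     trsvcid[32];          /* "4420"                         */
    uint8_t  reserved64[192];
    char     subnqn[256];
    char     traddr[256];          /* "10.0.0.2"                     */
    uint8_t  tsas[256];
} __attribute__((packed));

struct disc_log_page {             /* header; entries start at offset 1024 */
    uint64_t genctr;               /* 2 in this run                  */
    uint64_t numrec;               /* 2 in this run                  */
    uint16_t recfmt;
    uint8_t  reserved18[1006];
    struct disc_log_entry entries[];
} __attribute__((packed));

static void print_disc_log(const void *buf)
{
    const struct disc_log_page *log = buf;

    for (uint64_t i = 0; i < log->numrec; i++) {
        const struct disc_log_entry *e = &log->entries[i];
        printf("entry %llu: trtype=%u subtype=%u traddr=%.256s trsvcid=%.32s subnqn=%.256s\n",
               (unsigned long long)i, e->trtype, e->subtype,
               e->traddr, e->trsvcid, e->subnqn);
    }
}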
00:14:57.104 [2024-07-12 06:39:36.848511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.848517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.848521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.848536] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.848552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.848569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.104 [2024-07-12 06:39:36.848619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.848626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.848630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.848645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.848661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.848677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.104 [2024-07-12 06:39:36.848728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.848735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.848739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.848754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.848770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.848787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.104 [2024-07-12 06:39:36.848838] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.848844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.848848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.848863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.848879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.848895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.104 [2024-07-12 06:39:36.848944] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.848950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.848954] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.848958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.852988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.853013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.853018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x186ad70) 00:14:57.104 [2024-07-12 06:39:36.853028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.104 [2024-07-12 06:39:36.853059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4a10, cid 3, qid 0 00:14:57.104 [2024-07-12 06:39:36.853117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.104 [2024-07-12 06:39:36.853125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.104 [2024-07-12 06:39:36.853129] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.104 [2024-07-12 06:39:36.853134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4a10) on tqpair=0x186ad70 00:14:57.104 [2024-07-12 06:39:36.853144] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:57.104 00:14:57.104 06:39:36 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:57.104 [2024-07-12 06:39:36.889029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
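[editor's note] The identify pass against nqn.2016-06.io.spdk:cnode1 that starts here is driven by the spdk_nvme_identify binary with a transport-ID string passed via -r. For orientation, a hedged sketch of the equivalent programmatic flow, assuming SPDK's public host API (spdk/nvme.h, spdk/env.h) roughly as shipped in the v24.01 line under test; exact signatures should be checked against the headers:

/* Hedged sketch of the programmatic equivalent of the -r invocation above. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts env_opts;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the log passes via -r above. */
    struct spdk_nvme_transport_id trid = {0};
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the whole admin state machine traced below: icreq, FABRIC
     * CONNECT, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, AER setup, ... */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("connected: mn=%.40s sn=%.20s\n",
           (const char *)cdata->mn, (const char *)cdata->sn);

    /* Performs the CC.SHN shutdown handshake seen in the destruct trace. */
    spdk_nvme_detach(ctrlr);
    return 0;
}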
00:14:57.104 [2024-07-12 06:39:36.889072] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80089 ] 00:14:57.367 [2024-07-12 06:39:37.028522] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:57.367 [2024-07-12 06:39:37.028596] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:57.367 [2024-07-12 06:39:37.028604] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:57.367 [2024-07-12 06:39:37.028617] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:57.367 [2024-07-12 06:39:37.028643] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:57.367 [2024-07-12 06:39:37.028761] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:57.367 [2024-07-12 06:39:37.028831] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2446d70 0 00:14:57.367 [2024-07-12 06:39:37.033030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:57.367 [2024-07-12 06:39:37.033055] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:57.367 [2024-07-12 06:39:37.033077] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:57.367 [2024-07-12 06:39:37.033082] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:57.367 [2024-07-12 06:39:37.033122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.367 [2024-07-12 06:39:37.033129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.367 [2024-07-12 06:39:37.033134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.367 [2024-07-12 06:39:37.033147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:57.367 [2024-07-12 06:39:37.033178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.367 [2024-07-12 06:39:37.040117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.367 [2024-07-12 06:39:37.040140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.040162] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.040179] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:57.368 [2024-07-12 06:39:37.040186] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:57.368 [2024-07-12 06:39:37.040193] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:57.368 [2024-07-12 06:39:37.040209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040218] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.040228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.040256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.040323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.040331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.040335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.040346] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:57.368 [2024-07-12 06:39:37.040354] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:57.368 [2024-07-12 06:39:37.040362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.040378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.040413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.040482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.040489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.040493] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.040506] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:57.368 [2024-07-12 06:39:37.040515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.368 [2024-07-12 06:39:37.040523] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040528] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.040540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.040559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.040609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.040617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 
06:39:37.040621] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040625] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.040633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.368 [2024-07-12 06:39:37.040644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.040661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.040679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.040733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.040740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.040744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.040755] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:57.368 [2024-07-12 06:39:37.040761] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:57.368 [2024-07-12 06:39:37.040770] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.368 [2024-07-12 06:39:37.040888] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:57.368 [2024-07-12 06:39:37.040893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.368 [2024-07-12 06:39:37.040902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.040911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.040919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.040937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.040989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.040996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.041000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 
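[editor's note] The state transitions just traced (check en, disable and wait for CSTS.RDY = 0, controller is disabled, enable controller by writing CC.EN = 1) and the wait for CSTS.RDY = 1 that follows are the spec-defined controller enable handshake, carried over TCP as the FABRIC PROPERTY GET/SET commands interleaved in this log. A minimal sketch of that handshake, with hypothetical prop_get32/prop_set32/elapsed_ms_exceeds helpers standing in for the fabrics property path, and with CC fields such as IOSQES/IOCQES/MPS omitted for brevity:

/* Sketch of the CC.EN/CSTS.RDY enable handshake the state machine runs. */
#include <stdbool.h>
#include <stdint.h>

#define NVME_REG_CC   0x14u       /* controller configuration property */
#define NVME_REG_CSTS 0x1cu       /* controller status property        */
#define CC_EN         (1u << 0)
#define CSTS_RDY      (1u << 0)

uint32_t prop_get32(uint32_t ofst);              /* hypothetical helper */
void     prop_set32(uint32_t ofst, uint32_t v);  /* hypothetical helper */
bool     elapsed_ms_exceeds(uint32_t ms);        /* hypothetical timer  */

/* Returns true once CSTS.RDY = 1, mirroring the 15000 ms timeout above. */
static bool enable_controller(void)
{
    uint32_t cc = prop_get32(NVME_REG_CC);

    /* "CC.EN = 0 && CSTS.RDY = 0": controller is disabled, so enable it. */
    if ((cc & CC_EN) == 0 && (prop_get32(NVME_REG_CSTS) & CSTS_RDY) == 0) {
        prop_set32(NVME_REG_CC, cc | CC_EN);     /* "Setting CC.EN = 1" */
    }

    /* "setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)" */
    while ((prop_get32(NVME_REG_CSTS) & CSTS_RDY) == 0) {
        if (elapsed_ms_exceeds(15000)) {
            return false;                        /* enable timed out */
        }
    }
    return true;                                 /* controller is ready */
}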
[2024-07-12 06:39:37.041012] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.368 [2024-07-12 06:39:37.041036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.041053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.041073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.041139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.041147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.041151] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041156] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.041162] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.368 [2024-07-12 06:39:37.041168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:57.368 [2024-07-12 06:39:37.041177] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:57.368 [2024-07-12 06:39:37.041193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.368 [2024-07-12 06:39:37.041203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.041221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.368 [2024-07-12 06:39:37.041240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.041328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.368 [2024-07-12 06:39:37.041335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.368 [2024-07-12 06:39:37.041340] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041344] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=4096, cccid=0 00:14:57.368 [2024-07-12 06:39:37.041350] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24905f0) on tqpair(0x2446d70): expected_datao=0, payload_size=4096 00:14:57.368 [2024-07-12 06:39:37.041360] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041365] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.041381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.041385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041390] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.041399] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:57.368 [2024-07-12 06:39:37.041405] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:57.368 [2024-07-12 06:39:37.041410] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:57.368 [2024-07-12 06:39:37.041415] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:57.368 [2024-07-12 06:39:37.041420] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:57.368 [2024-07-12 06:39:37.041426] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:57.368 [2024-07-12 06:39:37.041441] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.368 [2024-07-12 06:39:37.041451] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041456] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.368 [2024-07-12 06:39:37.041469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.368 [2024-07-12 06:39:37.041490] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.368 [2024-07-12 06:39:37.041540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.368 [2024-07-12 06:39:37.041548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.368 [2024-07-12 06:39:37.041552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.368 [2024-07-12 06:39:37.041556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24905f0) on tqpair=0x2446d70 00:14:57.368 [2024-07-12 06:39:37.041566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.041582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.369 [2024-07-12 06:39:37.041589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041598] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.041605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.369 [2024-07-12 06:39:37.041611] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.041627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.369 [2024-07-12 06:39:37.041633] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041642] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.041649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.369 [2024-07-12 06:39:37.041654] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.041668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.041676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041681] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.041693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.369 [2024-07-12 06:39:37.041729] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24905f0, cid 0, qid 0 00:14:57.369 [2024-07-12 06:39:37.041737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490750, cid 1, qid 0 00:14:57.369 [2024-07-12 06:39:37.041742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24908b0, cid 2, qid 0 00:14:57.369 [2024-07-12 06:39:37.041747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.369 [2024-07-12 06:39:37.041753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.369 [2024-07-12 06:39:37.041844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.369 [2024-07-12 06:39:37.041851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.369 [2024-07-12 06:39:37.041855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.369 [2024-07-12 06:39:37.041867] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:57.369 [2024-07-12 06:39:37.041872] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.041881] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.041892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.041899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.041908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.041916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.369 [2024-07-12 06:39:37.041934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.369 [2024-07-12 06:39:37.042002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.369 [2024-07-12 06:39:37.042011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.369 [2024-07-12 06:39:37.042015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.369 [2024-07-12 06:39:37.042082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042093] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.042118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.369 [2024-07-12 06:39:37.042139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.369 [2024-07-12 06:39:37.042201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.369 [2024-07-12 06:39:37.042209] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.369 [2024-07-12 06:39:37.042213] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042217] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=4096, cccid=4 00:14:57.369 [2024-07-12 06:39:37.042222] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490b70) on tqpair(0x2446d70): expected_datao=0, payload_size=4096 00:14:57.369 [2024-07-12 06:39:37.042231] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042235] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:14:57.369 [2024-07-12 06:39:37.042244] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.369 [2024-07-12 06:39:37.042251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.369 [2024-07-12 06:39:37.042255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.369 [2024-07-12 06:39:37.042291] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:57.369 [2024-07-12 06:39:37.042302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042332] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.042340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.369 [2024-07-12 06:39:37.042361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.369 [2024-07-12 06:39:37.042433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.369 [2024-07-12 06:39:37.042440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.369 [2024-07-12 06:39:37.042445] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042449] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=4096, cccid=4 00:14:57.369 [2024-07-12 06:39:37.042454] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490b70) on tqpair(0x2446d70): expected_datao=0, payload_size=4096 00:14:57.369 [2024-07-12 06:39:37.042463] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042468] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.369 [2024-07-12 06:39:37.042483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.369 [2024-07-12 06:39:37.042488] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042492] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.369 [2024-07-12 06:39:37.042509] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042521] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042535] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.369 [2024-07-12 06:39:37.042547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.369 [2024-07-12 06:39:37.042567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.369 [2024-07-12 06:39:37.042646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.369 [2024-07-12 06:39:37.042658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.369 [2024-07-12 06:39:37.042662] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042667] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=4096, cccid=4 00:14:57.369 [2024-07-12 06:39:37.042672] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490b70) on tqpair(0x2446d70): expected_datao=0, payload_size=4096 00:14:57.369 [2024-07-12 06:39:37.042681] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042686] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.369 [2024-07-12 06:39:37.042702] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.369 [2024-07-12 06:39:37.042706] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.369 [2024-07-12 06:39:37.042710] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.369 [2024-07-12 06:39:37.042720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042730] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042749] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042755] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:57.369 [2024-07-12 06:39:37.042761] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:57.370 [2024-07-12 06:39:37.042767] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:57.370 [2024-07-12 06:39:37.042773] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:57.370 [2024-07-12 06:39:37.042789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.042794] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.042799] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.042807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.042815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.042820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.042824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.042831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.370 [2024-07-12 06:39:37.042861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.370 [2024-07-12 06:39:37.042869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490cd0, cid 5, qid 0 00:14:57.370 [2024-07-12 06:39:37.042951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.042959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 [2024-07-12 06:39:37.042963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.042967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.370 [2024-07-12 06:39:37.042989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.042995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 [2024-07-12 06:39:37.043000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043004] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490cd0) on tqpair=0x2446d70 00:14:57.370 [2024-07-12 06:39:37.043016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490cd0, cid 5, qid 0 00:14:57.370 [2024-07-12 06:39:37.043119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.043127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 [2024-07-12 06:39:37.043131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490cd0) on tqpair=0x2446d70 00:14:57.370 [2024-07-12 06:39:37.043148] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043165] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490cd0, cid 5, qid 0 00:14:57.370 [2024-07-12 06:39:37.043237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.043244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 [2024-07-12 06:39:37.043248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490cd0) on tqpair=0x2446d70 00:14:57.370 [2024-07-12 06:39:37.043265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490cd0, cid 5, qid 0 00:14:57.370 [2024-07-12 06:39:37.043352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.043359] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 [2024-07-12 06:39:37.043364] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490cd0) on tqpair=0x2446d70 00:14:57.370 [2024-07-12 06:39:37.043384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043393] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043414] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043418] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2446d70) 00:14:57.370 [2024-07-12 06:39:37.043474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.370 [2024-07-12 06:39:37.043494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490cd0, cid 5, qid 0 00:14:57.370 [2024-07-12 06:39:37.043502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490b70, cid 4, qid 0 00:14:57.370 [2024-07-12 06:39:37.043508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490e30, cid 6, qid 0 00:14:57.370 [2024-07-12 06:39:37.043513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490f90, cid 7, qid 0 00:14:57.370 [2024-07-12 06:39:37.043650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.370 [2024-07-12 06:39:37.043658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.370 [2024-07-12 06:39:37.043662] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043667] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=8192, cccid=5 00:14:57.370 [2024-07-12 06:39:37.043672] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490cd0) on tqpair(0x2446d70): expected_datao=0, payload_size=8192 00:14:57.370 [2024-07-12 06:39:37.043692] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043698] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.370 [2024-07-12 06:39:37.043711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.370 [2024-07-12 06:39:37.043715] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043720] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=512, cccid=4 00:14:57.370 [2024-07-12 06:39:37.043725] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490b70) on tqpair(0x2446d70): expected_datao=0, payload_size=512 00:14:57.370 [2024-07-12 06:39:37.043733] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043737] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043743] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.370 [2024-07-12 06:39:37.043750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.370 [2024-07-12 06:39:37.043754] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043758] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=512, cccid=6 00:14:57.370 [2024-07-12 06:39:37.043763] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490e30) on tqpair(0x2446d70): expected_datao=0, payload_size=512 00:14:57.370 [2024-07-12 06:39:37.043771] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043776] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043782] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:57.370 [2024-07-12 06:39:37.043788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:57.370 [2024-07-12 06:39:37.043793] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043797] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2446d70): datao=0, datal=4096, cccid=7 00:14:57.370 [2024-07-12 06:39:37.043802] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2490f90) on tqpair(0x2446d70): expected_datao=0, payload_size=4096 00:14:57.370 [2024-07-12 06:39:37.043810] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043814] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.043829] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 [2024-07-12 06:39:37.043834] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.370 [2024-07-12 06:39:37.043838] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490cd0) on tqpair=0x2446d70 00:14:57.370 [2024-07-12 06:39:37.043856] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.370 [2024-07-12 06:39:37.043864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.370 ===================================================== 00:14:57.370 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.370 ===================================================== 00:14:57.370 Controller Capabilities/Features 00:14:57.370 ================================ 00:14:57.370 Vendor ID: 8086 00:14:57.370 Subsystem Vendor ID: 8086 00:14:57.370 Serial Number: SPDK00000000000001 00:14:57.370 Model Number: SPDK bdev Controller 00:14:57.370 Firmware Version: 24.01.1 00:14:57.370 Recommended Arb Burst: 6 00:14:57.370 IEEE OUI Identifier: e4 d2 5c 00:14:57.370 Multi-path I/O 00:14:57.370 May have multiple subsystem ports: Yes 00:14:57.370 May have multiple controllers: Yes 00:14:57.370 Associated with SR-IOV VF: No 00:14:57.370 Max Data Transfer Size: 131072 00:14:57.370 Max Number of Namespaces: 32 00:14:57.370 Max Number of I/O Queues: 127 00:14:57.370 NVMe Specification Version (VS): 1.3 00:14:57.371 NVMe Specification Version (Identify): 1.3 00:14:57.371 Maximum Queue Entries: 128 00:14:57.371 Contiguous Queues Required: Yes 00:14:57.371 Arbitration Mechanisms Supported 00:14:57.371 Weighted Round Robin: Not Supported 00:14:57.371 Vendor Specific: Not Supported 00:14:57.371 Reset Timeout: 15000 ms 00:14:57.371 Doorbell Stride: 4 bytes 00:14:57.371 NVM Subsystem Reset: Not Supported 00:14:57.371 Command Sets Supported 00:14:57.371 NVM Command Set: Supported 00:14:57.371 Boot Partition: Not Supported 00:14:57.371 Memory Page Size Minimum: 4096 bytes 00:14:57.371 Memory Page Size Maximum: 4096 bytes 00:14:57.371 Persistent Memory Region: Not Supported 00:14:57.371 Optional Asynchronous Events Supported 00:14:57.371 Namespace Attribute Notices: Supported 00:14:57.371 Firmware Activation Notices: Not Supported 00:14:57.371 ANA Change Notices: Not Supported 00:14:57.371 PLE 
Aggregate Log Change Notices: Not Supported 00:14:57.371 LBA Status Info Alert Notices: Not Supported 00:14:57.371 EGE Aggregate Log Change Notices: Not Supported 00:14:57.371 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.371 Zone Descriptor Change Notices: Not Supported 00:14:57.371 Discovery Log Change Notices: Not Supported 00:14:57.371 Controller Attributes 00:14:57.371 128-bit Host Identifier: Supported 00:14:57.371 Non-Operational Permissive Mode: Not Supported 00:14:57.371 NVM Sets: Not Supported 00:14:57.371 Read Recovery Levels: Not Supported 00:14:57.371 Endurance Groups: Not Supported 00:14:57.371 Predictable Latency Mode: Not Supported 00:14:57.371 Traffic Based Keep ALive: Not Supported 00:14:57.371 Namespace Granularity: Not Supported 00:14:57.371 SQ Associations: Not Supported 00:14:57.371 UUID List: Not Supported 00:14:57.371 Multi-Domain Subsystem: Not Supported 00:14:57.371 Fixed Capacity Management: Not Supported 00:14:57.371 Variable Capacity Management: Not Supported 00:14:57.371 Delete Endurance Group: Not Supported 00:14:57.371 Delete NVM Set: Not Supported 00:14:57.371 Extended LBA Formats Supported: Not Supported 00:14:57.371 Flexible Data Placement Supported: Not Supported 00:14:57.371 00:14:57.371 Controller Memory Buffer Support 00:14:57.371 ================================ 00:14:57.371 Supported: No 00:14:57.371 00:14:57.371 Persistent Memory Region Support 00:14:57.371 ================================ 00:14:57.371 Supported: No 00:14:57.371 00:14:57.371 Admin Command Set Attributes 00:14:57.371 ============================ 00:14:57.371 Security Send/Receive: Not Supported 00:14:57.371 Format NVM: Not Supported 00:14:57.371 Firmware Activate/Download: Not Supported 00:14:57.371 Namespace Management: Not Supported 00:14:57.371 Device Self-Test: Not Supported 00:14:57.371 Directives: Not Supported 00:14:57.371 NVMe-MI: Not Supported 00:14:57.371 Virtualization Management: Not Supported 00:14:57.371 Doorbell Buffer Config: Not Supported 00:14:57.371 Get LBA Status Capability: Not Supported 00:14:57.371 Command & Feature Lockdown Capability: Not Supported 00:14:57.371 Abort Command Limit: 4 00:14:57.371 Async Event Request Limit: 4 00:14:57.371 Number of Firmware Slots: N/A 00:14:57.371 Firmware Slot 1 Read-Only: N/A 00:14:57.371 Firmware Activation Without Reset: [2024-07-12 06:39:37.043868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.371 [2024-07-12 06:39:37.043872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490b70) on tqpair=0x2446d70 00:14:57.371 [2024-07-12 06:39:37.043884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.371 [2024-07-12 06:39:37.043891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.371 [2024-07-12 06:39:37.043911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.371 [2024-07-12 06:39:37.043915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490e30) on tqpair=0x2446d70 00:14:57.371 [2024-07-12 06:39:37.043924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.371 [2024-07-12 06:39:37.043930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.371 [2024-07-12 06:39:37.043934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.371 [2024-07-12 06:39:37.043938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490f90) on tqpair=0x2446d70 00:14:57.371 N/A 00:14:57.371 Multiple 
Update Detection Support: N/A 00:14:57.371 Firmware Update Granularity: No Information Provided 00:14:57.371 Per-Namespace SMART Log: No 00:14:57.371 Asymmetric Namespace Access Log Page: Not Supported 00:14:57.371 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:57.371 Command Effects Log Page: Supported 00:14:57.371 Get Log Page Extended Data: Supported 00:14:57.371 Telemetry Log Pages: Not Supported 00:14:57.371 Persistent Event Log Pages: Not Supported 00:14:57.371 Supported Log Pages Log Page: May Support 00:14:57.371 Commands Supported & Effects Log Page: Not Supported 00:14:57.371 Feature Identifiers & Effects Log Page:May Support 00:14:57.371 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.371 Data Area 4 for Telemetry Log: Not Supported 00:14:57.371 Error Log Page Entries Supported: 128 00:14:57.371 Keep Alive: Supported 00:14:57.371 Keep Alive Granularity: 10000 ms 00:14:57.371 00:14:57.371 NVM Command Set Attributes 00:14:57.371 ========================== 00:14:57.371 Submission Queue Entry Size 00:14:57.371 Max: 64 00:14:57.371 Min: 64 00:14:57.371 Completion Queue Entry Size 00:14:57.371 Max: 16 00:14:57.371 Min: 16 00:14:57.371 Number of Namespaces: 32 00:14:57.371 Compare Command: Supported 00:14:57.371 Write Uncorrectable Command: Not Supported 00:14:57.371 Dataset Management Command: Supported 00:14:57.371 Write Zeroes Command: Supported 00:14:57.371 Set Features Save Field: Not Supported 00:14:57.371 Reservations: Supported 00:14:57.371 Timestamp: Not Supported 00:14:57.371 Copy: Supported 00:14:57.371 Volatile Write Cache: Present 00:14:57.371 Atomic Write Unit (Normal): 1 00:14:57.371 Atomic Write Unit (PFail): 1 00:14:57.371 Atomic Compare & Write Unit: 1 00:14:57.371 Fused Compare & Write: Supported 00:14:57.371 Scatter-Gather List 00:14:57.371 SGL Command Set: Supported 00:14:57.371 SGL Keyed: Supported 00:14:57.371 SGL Bit Bucket Descriptor: Not Supported 00:14:57.371 SGL Metadata Pointer: Not Supported 00:14:57.371 Oversized SGL: Not Supported 00:14:57.371 SGL Metadata Address: Not Supported 00:14:57.371 SGL Offset: Supported 00:14:57.371 Transport SGL Data Block: Not Supported 00:14:57.371 Replay Protected Memory Block: Not Supported 00:14:57.371 00:14:57.371 Firmware Slot Information 00:14:57.371 ========================= 00:14:57.371 Active slot: 1 00:14:57.371 Slot 1 Firmware Revision: 24.01.1 00:14:57.371 00:14:57.371 00:14:57.371 Commands Supported and Effects 00:14:57.371 ============================== 00:14:57.371 Admin Commands 00:14:57.371 -------------- 00:14:57.371 Get Log Page (02h): Supported 00:14:57.371 Identify (06h): Supported 00:14:57.371 Abort (08h): Supported 00:14:57.371 Set Features (09h): Supported 00:14:57.371 Get Features (0Ah): Supported 00:14:57.371 Asynchronous Event Request (0Ch): Supported 00:14:57.371 Keep Alive (18h): Supported 00:14:57.371 I/O Commands 00:14:57.371 ------------ 00:14:57.371 Flush (00h): Supported LBA-Change 00:14:57.371 Write (01h): Supported LBA-Change 00:14:57.371 Read (02h): Supported 00:14:57.371 Compare (05h): Supported 00:14:57.371 Write Zeroes (08h): Supported LBA-Change 00:14:57.371 Dataset Management (09h): Supported LBA-Change 00:14:57.371 Copy (19h): Supported LBA-Change 00:14:57.371 Unknown (79h): Supported LBA-Change 00:14:57.371 Unknown (7Ah): Supported 00:14:57.371 00:14:57.371 Error Log 00:14:57.371 ========= 00:14:57.371 00:14:57.371 Arbitration 00:14:57.371 =========== 00:14:57.371 Arbitration Burst: 1 00:14:57.371 00:14:57.371 Power Management 00:14:57.371 ================ 00:14:57.371 
Number of Power States: 1 00:14:57.371 Current Power State: Power State #0 00:14:57.371 Power State #0: 00:14:57.371 Max Power: 0.00 W 00:14:57.371 Non-Operational State: Operational 00:14:57.371 Entry Latency: Not Reported 00:14:57.371 Exit Latency: Not Reported 00:14:57.371 Relative Read Throughput: 0 00:14:57.371 Relative Read Latency: 0 00:14:57.371 Relative Write Throughput: 0 00:14:57.371 Relative Write Latency: 0 00:14:57.371 Idle Power: Not Reported 00:14:57.371 Active Power: Not Reported 00:14:57.371 Non-Operational Permissive Mode: Not Supported 00:14:57.371 00:14:57.371 Health Information 00:14:57.371 ================== 00:14:57.371 Critical Warnings: 00:14:57.371 Available Spare Space: OK 00:14:57.371 Temperature: OK 00:14:57.371 Device Reliability: OK 00:14:57.371 Read Only: No 00:14:57.371 Volatile Memory Backup: OK 00:14:57.371 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:57.371 Temperature Threshold: [2024-07-12 06:39:37.048107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.371 [2024-07-12 06:39:37.048118] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.371 [2024-07-12 06:39:37.048123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2446d70) 00:14:57.371 [2024-07-12 06:39:37.048133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.371 [2024-07-12 06:39:37.048162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490f90, cid 7, qid 0 00:14:57.371 [2024-07-12 06:39:37.048235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.371 [2024-07-12 06:39:37.048243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.371 [2024-07-12 06:39:37.048248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490f90) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.048303] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:57.372 [2024-07-12 06:39:37.048318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.372 [2024-07-12 06:39:37.048326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.372 [2024-07-12 06:39:37.048333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.372 [2024-07-12 06:39:37.048340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.372 [2024-07-12 06:39:37.048350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048355] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.048368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.048392] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 
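The GET FEATURES records in this trace (cdw10:00000001 arbitration, 00000002 power management, 00000004 temperature threshold, 00000007 number of queues, 00000005 error recovery just above) and the GET LOG PAGE records (cdw10:07ff0001 and friends) pack all of their arguments into command dword 10. A minimal sketch of that encoding, taken from the NVMe base specification rather than from SPDK itself (the helper names here are hypothetical, not SPDK API):

    /* Sketch: command dword 10 packing for Get Features and Get Log Page,
     * per the NVMe base spec. Helper names are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    /* Get Features (opcode 0Ah): the feature identifier sits in cdw10
     * bits 07:00, which is why the trace shows cdw10:00000001, 00000002,
     * 00000004, 00000005 and 00000007 for the five features queried. */
    static uint32_t get_features_cdw10(uint8_t fid)
    {
        return fid;
    }

    /* Get Log Page (opcode 02h): LID in bits 07:00, NUMDL (lower 16 bits
     * of the 0's-based dword transfer count) in bits 31:16. */
    static uint32_t get_log_page_cdw10(uint8_t lid, uint32_t payload_bytes)
    {
        uint32_t numd = payload_bytes / 4 - 1;
        return ((numd & 0xffff) << 16) | lid;
    }

    int main(void)
    {
        /* 128 error-log entries * 64 B = 8192 B -> 0x07ff0001,
         * matching the first GET LOG PAGE record above. */
        printf("0x%08x\n", get_log_page_cdw10(0x01, 8192));
        /* 512-byte SMART / firmware-slot logs -> 0x007f0002 / 0x007f0003 */
        printf("0x%08x\n", get_log_page_cdw10(0x02, 512));
        /* 4096-byte commands-supported-and-effects log -> 0x03ff0005 */
        printf("0x%08x\n", get_log_page_cdw10(0x05, 4096));
        printf("0x%08x\n", get_features_cdw10(0x05)); /* error recovery */
        return 0;
    }

The same sizes reappear in the c2h_data records above (datal=8192, 512, 512 and 4096 for cccid 5, 4, 6 and 7), since each log page comes back as a controller-to-host data transfer.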
00:14:57.372 [2024-07-12 06:39:37.048443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.048451] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.048455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.048470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.048487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.048509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.048591] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.048598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.048602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.048613] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:57.372 [2024-07-12 06:39:37.048618] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:57.372 [2024-07-12 06:39:37.048629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048634] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.048646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.048664] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.048735] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.048742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.048746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.048763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.048780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 
06:39:37.048798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.048851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.048858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.048863] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048867] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.048879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048884] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.048896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.048914] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.048961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.048968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.048972] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.048988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.048997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.049019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.049041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.049096] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.049103] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.049108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.049124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.049142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.049160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.049207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.049215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.049219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.372 [2024-07-12 06:39:37.049235] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049240] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049245] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.372 [2024-07-12 06:39:37.049252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.372 [2024-07-12 06:39:37.049270] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.372 [2024-07-12 06:39:37.049315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.372 [2024-07-12 06:39:37.049322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.372 [2024-07-12 06:39:37.049326] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.372 [2024-07-12 06:39:37.049330] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.049342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049347] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049351] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.049359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.049377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.049428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.049435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.049440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049444] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.049456] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049465] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.049473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.049491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.049541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.049549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.049553] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.049569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049578] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.049586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.049604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.049670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.049679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.049684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.049703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.049722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.049744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.049795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.049803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.049807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049811] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.049824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049829] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.049841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.049859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.049909] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.049916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.049921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on 
tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.049937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.049947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.049975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050000] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050057] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.050104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050122] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050187] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050191] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.050220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050325] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.050337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050439] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050449] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.050457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050538] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.050582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050683] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050692] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050700] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 
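The "pdu type = 5" and "pdu type = 7" lines that repeat throughout this trace are the host receive path dispatching on the NVMe/TCP common-header PDU type: 5 is a capsule response (a completion), 7 is controller-to-host data. A standalone, illustrative stand-in for the dispatch that nvme_tcp_pdu_psh_handle() performs (this is not the SPDK code itself):

    /* Illustrative only: PDU type values are from the NVMe/TCP transport
     * spec; the real dispatch lives in nvme_tcp_pdu_psh_handle(). */
    #include <stdint.h>
    #include <stdio.h>

    enum nvme_tcp_pdu_type {
        NVME_TCP_PDU_CAPSULE_RESP = 0x05, /* completion capsule      */
        NVME_TCP_PDU_C2H_DATA     = 0x07, /* controller-to-host data */
    };

    static void psh_handle(uint8_t pdu_type)
    {
        switch (pdu_type) {
        case NVME_TCP_PDU_CAPSULE_RESP:
            /* look up the request by CID and complete it, as the
             * nvme_tcp_capsule_resp_hdr_handle ->
             * nvme_tcp_req_complete_safe pairs in the log show */
            printf("pdu type = 5: complete request\n");
            break;
        case NVME_TCP_PDU_C2H_DATA:
            /* validate DATAO/DATAL against the request (the
             * expected_datao/payload_size values logged by
             * nvme_tcp_c2h_data_hdr_handle), then consume the payload */
            printf("pdu type = 7: read data\n");
            break;
        default:
            /* a real host would tear the connection down here */
            printf("unexpected pdu type %u\n", pdu_type);
            break;
        }
    }

    int main(void)
    {
        psh_handle(0x07);
        psh_handle(0x05);
        return 0;
    }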
00:14:57.373 [2024-07-12 06:39:37.050730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.373 [2024-07-12 06:39:37.050805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.373 [2024-07-12 06:39:37.050809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.373 [2024-07-12 06:39:37.050826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.373 [2024-07-12 06:39:37.050835] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.373 [2024-07-12 06:39:37.050843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.373 [2024-07-12 06:39:37.050861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.373 [2024-07-12 06:39:37.050917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.050924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.050942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.050947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.050958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.050963] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.050978] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.050987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 
[2024-07-12 06:39:37.051128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051211] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051303] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051319] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051324] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051428] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051437] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051529] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051540] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051545] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.051769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.051871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.051878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 
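The long run of FABRIC PROPERTY GET records here is the shutdown poll announced earlier by "Prepare to destruct SSD" and "shutdown timeout = 10000 ms": the host writes CC.SHN through a single Fabrics Property Set, then reads CSTS repeatedly until SHST reports shutdown complete ("shutdown complete in 7 milliseconds" just below). A simplified, self-contained sketch of that loop; the register offsets and bit values are from the NVMe base spec, while the property accessors are toy stand-ins for the Fabrics Property Set/Get capsules:

    /* Sketch of a graceful NVMe-oF shutdown. The in-memory "registers"
     * make this runnable stand-alone; a real host issues Fabrics
     * Property Set/Get commands over the admin queue instead. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC        0x14u
    #define NVME_REG_CSTS      0x1cu
    #define CC_SHN_NORMAL      (1u << 14)  /* CC.SHN = 01b: normal shutdown */
    #define CSTS_SHST_MASK     (3u << 2)
    #define CSTS_SHST_COMPLETE (2u << 2)   /* SHST = 10b: shutdown done */

    static uint32_t cc_reg; /* toy controller state */

    static void prop_set(uint32_t off, uint32_t val)
    {
        if (off == NVME_REG_CC)
            cc_reg = val; /* the one FABRIC PROPERTY SET in the trace */
    }

    static uint32_t prop_get(uint32_t off)
    {
        /* pretend the target finishes shutdown as soon as SHN is set */
        if (off == NVME_REG_CSTS && (cc_reg & CC_SHN_NORMAL))
            return CSTS_SHST_COMPLETE;
        return 0;
    }

    static bool shutdown_ctrlr(unsigned max_polls)
    {
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_SHN_NORMAL);
        /* each iteration maps to one FABRIC PROPERTY GET record above */
        while (max_polls--) {
            if ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) ==
                CSTS_SHST_COMPLETE)
                return true;
        }
        return false; /* 10000 ms budget exhausted; hard teardown next */
    }

    int main(void)
    {
        printf("shutdown %s\n",
               shutdown_ctrlr(1000) ? "complete" : "timed out");
        return 0;
    }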
[2024-07-12 06:39:37.051883] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051887] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.051899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051905] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.051909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.051917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.051935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.055971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.055992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.056015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.056020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.056036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.056042] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.056047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2446d70) 00:14:57.374 [2024-07-12 06:39:37.056056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:57.374 [2024-07-12 06:39:37.056084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2490a10, cid 3, qid 0 00:14:57.374 [2024-07-12 06:39:37.056154] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:57.374 [2024-07-12 06:39:37.056161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:57.374 [2024-07-12 06:39:37.056165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:57.374 [2024-07-12 06:39:37.056169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2490a10) on tqpair=0x2446d70 00:14:57.374 [2024-07-12 06:39:37.056179] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:57.374 0 Kelvin (-273 Celsius) 00:14:57.374 Available Spare: 0% 00:14:57.374 Available Spare Threshold: 0% 00:14:57.374 Life Percentage Used: 0% 00:14:57.374 Data Units Read: 0 00:14:57.374 Data Units Written: 0 00:14:57.374 Host Read Commands: 0 00:14:57.374 Host Write Commands: 0 00:14:57.374 Controller Busy Time: 0 minutes 00:14:57.374 Power Cycles: 0 00:14:57.374 Power On Hours: 0 hours 00:14:57.374 Unsafe Shutdowns: 0 00:14:57.374 Unrecoverable Media Errors: 0 00:14:57.374 Lifetime Error Log Entries: 0 00:14:57.374 Warning Temperature Time: 0 minutes 00:14:57.374 Critical Temperature Time: 0 minutes 00:14:57.374 00:14:57.374 Number of Queues 00:14:57.374 ================ 00:14:57.374 Number of I/O Submission Queues: 127 00:14:57.374 Number of I/O Completion Queues: 127 00:14:57.374 00:14:57.374 Active Namespaces 00:14:57.374 ================= 00:14:57.374 Namespace ID:1 00:14:57.374 
Error Recovery Timeout: Unlimited 00:14:57.374 Command Set Identifier: NVM (00h) 00:14:57.374 Deallocate: Supported 00:14:57.374 Deallocated/Unwritten Error: Not Supported 00:14:57.374 Deallocated Read Value: Unknown 00:14:57.374 Deallocate in Write Zeroes: Not Supported 00:14:57.374 Deallocated Guard Field: 0xFFFF 00:14:57.374 Flush: Supported 00:14:57.374 Reservation: Supported 00:14:57.374 Namespace Sharing Capabilities: Multiple Controllers 00:14:57.374 Size (in LBAs): 131072 (0GiB) 00:14:57.374 Capacity (in LBAs): 131072 (0GiB) 00:14:57.374 Utilization (in LBAs): 131072 (0GiB) 00:14:57.374 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:57.374 EUI64: ABCDEF0123456789 00:14:57.374 UUID: d4c7e807-7e2d-4320-91a6-31fc8e8a1e6d 00:14:57.375 Thin Provisioning: Not Supported 00:14:57.375 Per-NS Atomic Units: Yes 00:14:57.375 Atomic Boundary Size (Normal): 0 00:14:57.375 Atomic Boundary Size (PFail): 0 00:14:57.375 Atomic Boundary Offset: 0 00:14:57.375 Maximum Single Source Range Length: 65535 00:14:57.375 Maximum Copy Length: 65535 00:14:57.375 Maximum Source Range Count: 1 00:14:57.375 NGUID/EUI64 Never Reused: No 00:14:57.375 Namespace Write Protected: No 00:14:57.375 Number of LBA Formats: 1 00:14:57.375 Current LBA Format: LBA Format #00 00:14:57.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:57.375 00:14:57.375 06:39:37 -- host/identify.sh@51 -- # sync 00:14:57.375 06:39:37 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.375 06:39:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.375 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:14:57.375 06:39:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.375 06:39:37 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:57.375 06:39:37 -- host/identify.sh@56 -- # nvmftestfini 00:14:57.375 06:39:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:57.375 06:39:37 -- nvmf/common.sh@116 -- # sync 00:14:57.375 06:39:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:57.375 06:39:37 -- nvmf/common.sh@119 -- # set +e 00:14:57.375 06:39:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:57.375 06:39:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:57.375 rmmod nvme_tcp 00:14:57.375 rmmod nvme_fabrics 00:14:57.375 rmmod nvme_keyring 00:14:57.375 06:39:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:57.375 06:39:37 -- nvmf/common.sh@123 -- # set -e 00:14:57.375 06:39:37 -- nvmf/common.sh@124 -- # return 0 00:14:57.375 06:39:37 -- nvmf/common.sh@477 -- # '[' -n 80052 ']' 00:14:57.375 06:39:37 -- nvmf/common.sh@478 -- # killprocess 80052 00:14:57.375 06:39:37 -- common/autotest_common.sh@926 -- # '[' -z 80052 ']' 00:14:57.375 06:39:37 -- common/autotest_common.sh@930 -- # kill -0 80052 00:14:57.375 06:39:37 -- common/autotest_common.sh@931 -- # uname 00:14:57.375 06:39:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.375 06:39:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80052 00:14:57.375 killing process with pid 80052 00:14:57.375 06:39:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:57.375 06:39:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:57.375 06:39:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80052' 00:14:57.375 06:39:37 -- common/autotest_common.sh@945 -- # kill 80052 00:14:57.375 [2024-07-12 06:39:37.229135] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:57.375 06:39:37 -- common/autotest_common.sh@950 -- # wait 80052 00:14:57.634 06:39:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:57.634 06:39:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:57.634 06:39:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:57.634 06:39:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.634 06:39:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:57.634 06:39:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.634 06:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.634 06:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.634 06:39:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:57.634 00:14:57.634 real 0m2.388s 00:14:57.634 user 0m6.976s 00:14:57.634 sys 0m0.570s 00:14:57.634 06:39:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.634 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:14:57.634 ************************************ 00:14:57.634 END TEST nvmf_identify 00:14:57.634 ************************************ 00:14:57.634 06:39:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:57.634 06:39:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:57.634 06:39:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:57.634 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:14:57.634 ************************************ 00:14:57.634 START TEST nvmf_perf 00:14:57.634 ************************************ 00:14:57.634 06:39:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:57.634 * Looking for test storage... 
00:14:57.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:57.893 06:39:37 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.893 06:39:37 -- nvmf/common.sh@7 -- # uname -s 00:14:57.893 06:39:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.893 06:39:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.893 06:39:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.893 06:39:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.893 06:39:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.893 06:39:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.893 06:39:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.893 06:39:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.893 06:39:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.893 06:39:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.893 06:39:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:14:57.893 06:39:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:14:57.893 06:39:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.893 06:39:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.893 06:39:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.893 06:39:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.893 06:39:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.893 06:39:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.893 06:39:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.893 06:39:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.893 06:39:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.893 06:39:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.893 06:39:37 -- paths/export.sh@5 -- 
# export PATH 00:14:57.893 06:39:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.893 06:39:37 -- nvmf/common.sh@46 -- # : 0 00:14:57.893 06:39:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:57.893 06:39:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:57.893 06:39:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:57.893 06:39:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.893 06:39:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.893 06:39:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:57.893 06:39:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:57.893 06:39:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:57.893 06:39:37 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:57.893 06:39:37 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:57.893 06:39:37 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.893 06:39:37 -- host/perf.sh@17 -- # nvmftestinit 00:14:57.893 06:39:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:57.893 06:39:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.893 06:39:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:57.893 06:39:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:57.893 06:39:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:57.893 06:39:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.893 06:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.893 06:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.893 06:39:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:57.893 06:39:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:57.893 06:39:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:57.893 06:39:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:57.893 06:39:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:57.893 06:39:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:57.893 06:39:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.893 06:39:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.893 06:39:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.893 06:39:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:57.893 06:39:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.893 06:39:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.893 06:39:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.893 06:39:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.893 06:39:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.893 06:39:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.893 06:39:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.893 06:39:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.893 06:39:37 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:57.893 06:39:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:57.893 Cannot find device "nvmf_tgt_br" 00:14:57.893 06:39:37 -- nvmf/common.sh@154 -- # true 00:14:57.893 06:39:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.893 Cannot find device "nvmf_tgt_br2" 00:14:57.893 06:39:37 -- nvmf/common.sh@155 -- # true 00:14:57.893 06:39:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:57.893 06:39:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:57.893 Cannot find device "nvmf_tgt_br" 00:14:57.893 06:39:37 -- nvmf/common.sh@157 -- # true 00:14:57.893 06:39:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:57.893 Cannot find device "nvmf_tgt_br2" 00:14:57.893 06:39:37 -- nvmf/common.sh@158 -- # true 00:14:57.893 06:39:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:57.893 06:39:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:57.893 06:39:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.893 06:39:37 -- nvmf/common.sh@161 -- # true 00:14:57.893 06:39:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.893 06:39:37 -- nvmf/common.sh@162 -- # true 00:14:57.893 06:39:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.893 06:39:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.893 06:39:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.893 06:39:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.893 06:39:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.893 06:39:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.893 06:39:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.893 06:39:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.893 06:39:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.894 06:39:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:58.151 06:39:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:58.151 06:39:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:58.151 06:39:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:58.151 06:39:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.151 06:39:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.151 06:39:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.151 06:39:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:58.151 06:39:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:58.151 06:39:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.151 06:39:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.151 06:39:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.151 06:39:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.151 06:39:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.151 06:39:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:58.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:14:58.151 00:14:58.151 --- 10.0.0.2 ping statistics --- 00:14:58.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.151 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:58.151 06:39:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:58.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:58.151 00:14:58.151 --- 10.0.0.3 ping statistics --- 00:14:58.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.151 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:58.151 06:39:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:58.151 00:14:58.151 --- 10.0.0.1 ping statistics --- 00:14:58.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.152 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:58.152 06:39:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.152 06:39:37 -- nvmf/common.sh@421 -- # return 0 00:14:58.152 06:39:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.152 06:39:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.152 06:39:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.152 06:39:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.152 06:39:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.152 06:39:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.152 06:39:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:58.152 06:39:37 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:58.152 06:39:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.152 06:39:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:58.152 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 06:39:37 -- nvmf/common.sh@469 -- # nvmfpid=80254 00:14:58.152 06:39:37 -- nvmf/common.sh@470 -- # waitforlisten 80254 00:14:58.152 06:39:37 -- common/autotest_common.sh@819 -- # '[' -z 80254 ']' 00:14:58.152 06:39:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.152 06:39:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.152 06:39:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.152 06:39:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.152 06:39:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.152 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 [2024-07-12 06:39:38.006618] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
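The repeated "Cannot find device" and "Cannot open network namespace" messages above are the expected first-run case: the nvmf_veth_init helper tears down any leftover topology before rebuilding it, and the xtrace shows `true` absorbing each failed delete. The topology it then builds can be condensed to a short sketch; every name and address below is taken from the commands in the log, but this is a hand-written summary (link-up steps omitted for brevity), not the script itself:

    # condensed sketch of the veth/bridge layout assembled above (names from the log)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are the sanity check that this wiring passes traffic before nvmf_tgt is launched in the namespace.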
00:14:58.152 [2024-07-12 06:39:38.006706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.410 [2024-07-12 06:39:38.149897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.410 [2024-07-12 06:39:38.189877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.410 [2024-07-12 06:39:38.190300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.410 [2024-07-12 06:39:38.190447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.410 [2024-07-12 06:39:38.190658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.410 [2024-07-12 06:39:38.190813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.410 [2024-07-12 06:39:38.190873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.410 [2024-07-12 06:39:38.191114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.410 [2024-07-12 06:39:38.191821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.410 06:39:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.410 06:39:38 -- common/autotest_common.sh@852 -- # return 0 00:14:58.410 06:39:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.410 06:39:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:58.410 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:14:58.410 06:39:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.410 06:39:38 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:58.410 06:39:38 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:58.978 06:39:38 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:58.978 06:39:38 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:59.236 06:39:39 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:59.236 06:39:39 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.496 06:39:39 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:59.496 06:39:39 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:59.496 06:39:39 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:59.496 06:39:39 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:59.496 06:39:39 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:59.754 [2024-07-12 06:39:39.490274] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.754 06:39:39 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.013 06:39:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:00.013 06:39:39 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.272 06:39:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:00.272 06:39:39 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:00.531 
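Stripped of the xtrace prefixes, the target-side bring-up for this perf run reduces to a handful of RPCs. This is a hedged condensation of the calls visible above and in the record just below (the listener step), using the same subsystem name, bdev names, and address that appear in the log:

    # condensed bring-up, as issued through scripts/rpc.py above
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 64 512                      # returns Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # local NVMe found via framework_get_config + jq
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420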
06:39:40 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.531 [2024-07-12 06:39:40.391468] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.531 06:39:40 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.790 06:39:40 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:00.790 06:39:40 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:00.790 06:39:40 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:00.790 06:39:40 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:02.166 Initializing NVMe Controllers 00:15:02.166 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:15:02.166 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:15:02.166 Initialization complete. Launching workers. 00:15:02.166 ======================================================== 00:15:02.166 Latency(us) 00:15:02.166 Device Information : IOPS MiB/s Average min max 00:15:02.166 PCIE (0000:00:06.0) NSID 1 from core 0: 22720.00 88.75 1407.71 371.55 8253.44 00:15:02.166 ======================================================== 00:15:02.166 Total : 22720.00 88.75 1407.71 371.55 8253.44 00:15:02.166 00:15:02.166 06:39:41 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:03.542 Initializing NVMe Controllers 00:15:03.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:03.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:03.542 Initialization complete. Launching workers. 00:15:03.542 ======================================================== 00:15:03.542 Latency(us) 00:15:03.542 Device Information : IOPS MiB/s Average min max 00:15:03.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3617.99 14.13 274.94 106.78 7277.10 00:15:03.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.00 0.47 8304.87 6984.71 12044.07 00:15:03.542 ======================================================== 00:15:03.542 Total : 3738.99 14.61 534.81 106.78 12044.07 00:15:03.542 00:15:03.542 06:39:43 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:04.914 Initializing NVMe Controllers 00:15:04.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:04.914 Initialization complete. Launching workers. 
00:15:04.914 ======================================================== 00:15:04.914 Latency(us) 00:15:04.914 Device Information : IOPS MiB/s Average min max 00:15:04.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8644.57 33.77 3703.45 531.30 7779.72 00:15:04.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3984.13 15.56 8033.05 5890.35 12422.66 00:15:04.914 ======================================================== 00:15:04.914 Total : 12628.70 49.33 5069.36 531.30 12422.66 00:15:04.914 00:15:04.914 06:39:44 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:04.914 06:39:44 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:07.441 Initializing NVMe Controllers 00:15:07.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.441 Controller IO queue size 128, less than required. 00:15:07.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.441 Controller IO queue size 128, less than required. 00:15:07.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:07.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:07.441 Initialization complete. Launching workers. 00:15:07.441 ======================================================== 00:15:07.441 Latency(us) 00:15:07.441 Device Information : IOPS MiB/s Average min max 00:15:07.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1693.03 423.26 76968.72 50312.02 177688.03 00:15:07.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 613.65 153.41 213230.79 112825.73 336991.28 00:15:07.441 ======================================================== 00:15:07.441 Total : 2306.68 576.67 113218.68 50312.02 336991.28 00:15:07.441 00:15:07.441 06:39:46 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:07.441 No valid NVMe controllers or AIO or URING devices found 00:15:07.441 Initializing NVMe Controllers 00:15:07.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.441 Controller IO queue size 128, less than required. 00:15:07.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.441 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:07.441 Controller IO queue size 128, less than required. 00:15:07.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.441 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:15:07.441 WARNING: Some requested NVMe devices were skipped 00:15:07.441 06:39:47 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:09.972 Initializing NVMe Controllers 00:15:09.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.972 Controller IO queue size 128, less than required. 00:15:09.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:09.972 Controller IO queue size 128, less than required. 00:15:09.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:09.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:09.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:09.972 Initialization complete. Launching workers. 00:15:09.972 00:15:09.972 ==================== 00:15:09.972 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:09.972 TCP transport: 00:15:09.972 polls: 6696 00:15:09.972 idle_polls: 0 00:15:09.972 sock_completions: 6696 00:15:09.972 nvme_completions: 6672 00:15:09.972 submitted_requests: 10083 00:15:09.972 queued_requests: 1 00:15:09.972 00:15:09.972 ==================== 00:15:09.972 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:09.972 TCP transport: 00:15:09.972 polls: 6742 00:15:09.972 idle_polls: 0 00:15:09.972 sock_completions: 6742 00:15:09.972 nvme_completions: 6472 00:15:09.972 submitted_requests: 9856 00:15:09.972 queued_requests: 1 00:15:09.972 ======================================================== 00:15:09.972 Latency(us) 00:15:09.972 Device Information : IOPS MiB/s Average min max 00:15:09.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1731.18 432.79 75265.56 44036.11 126102.94 00:15:09.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1681.19 420.30 76316.71 39148.75 145919.04 00:15:09.972 ======================================================== 00:15:09.972 Total : 3412.37 853.09 75783.43 39148.75 145919.04 00:15:09.972 00:15:09.972 06:39:49 -- host/perf.sh@66 -- # sync 00:15:09.972 06:39:49 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.230 06:39:49 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:10.230 06:39:49 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:10.230 06:39:49 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:15:10.488 06:39:50 -- host/perf.sh@72 -- # ls_guid=3923e365-061a-4710-b299-faf36f76271e 00:15:10.488 06:39:50 -- host/perf.sh@73 -- # get_lvs_free_mb 3923e365-061a-4710-b299-faf36f76271e 00:15:10.488 06:39:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3923e365-061a-4710-b299-faf36f76271e 00:15:10.488 06:39:50 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:10.488 06:39:50 -- common/autotest_common.sh@1345 -- # local fc 00:15:10.488 06:39:50 -- common/autotest_common.sh@1346 -- # local cs 00:15:10.488 06:39:50 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:10.746 06:39:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:10.746 { 
00:15:10.746 "uuid": "3923e365-061a-4710-b299-faf36f76271e", 00:15:10.746 "name": "lvs_0", 00:15:10.746 "base_bdev": "Nvme0n1", 00:15:10.746 "total_data_clusters": 1278, 00:15:10.746 "free_clusters": 1278, 00:15:10.746 "block_size": 4096, 00:15:10.746 "cluster_size": 4194304 00:15:10.746 } 00:15:10.746 ]' 00:15:10.746 06:39:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3923e365-061a-4710-b299-faf36f76271e") .free_clusters' 00:15:10.746 06:39:50 -- common/autotest_common.sh@1348 -- # fc=1278 00:15:10.746 06:39:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3923e365-061a-4710-b299-faf36f76271e") .cluster_size' 00:15:10.746 5112 00:15:10.746 06:39:50 -- common/autotest_common.sh@1349 -- # cs=4194304 00:15:10.746 06:39:50 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:15:10.746 06:39:50 -- common/autotest_common.sh@1353 -- # echo 5112 00:15:10.746 06:39:50 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:15:10.746 06:39:50 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3923e365-061a-4710-b299-faf36f76271e lbd_0 5112 00:15:11.003 06:39:50 -- host/perf.sh@80 -- # lb_guid=94be7ad7-606c-4d24-9e6f-b65cd05c3549 00:15:11.003 06:39:50 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 94be7ad7-606c-4d24-9e6f-b65cd05c3549 lvs_n_0 00:15:11.262 06:39:51 -- host/perf.sh@83 -- # ls_nested_guid=00432e41-6d8d-4596-afe4-ff6f3d603d30 00:15:11.262 06:39:51 -- host/perf.sh@84 -- # get_lvs_free_mb 00432e41-6d8d-4596-afe4-ff6f3d603d30 00:15:11.262 06:39:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=00432e41-6d8d-4596-afe4-ff6f3d603d30 00:15:11.262 06:39:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:11.262 06:39:51 -- common/autotest_common.sh@1345 -- # local fc 00:15:11.262 06:39:51 -- common/autotest_common.sh@1346 -- # local cs 00:15:11.520 06:39:51 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:11.520 06:39:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:11.520 { 00:15:11.520 "uuid": "3923e365-061a-4710-b299-faf36f76271e", 00:15:11.520 "name": "lvs_0", 00:15:11.520 "base_bdev": "Nvme0n1", 00:15:11.520 "total_data_clusters": 1278, 00:15:11.520 "free_clusters": 0, 00:15:11.520 "block_size": 4096, 00:15:11.520 "cluster_size": 4194304 00:15:11.520 }, 00:15:11.521 { 00:15:11.521 "uuid": "00432e41-6d8d-4596-afe4-ff6f3d603d30", 00:15:11.521 "name": "lvs_n_0", 00:15:11.521 "base_bdev": "94be7ad7-606c-4d24-9e6f-b65cd05c3549", 00:15:11.521 "total_data_clusters": 1276, 00:15:11.521 "free_clusters": 1276, 00:15:11.521 "block_size": 4096, 00:15:11.521 "cluster_size": 4194304 00:15:11.521 } 00:15:11.521 ]' 00:15:11.521 06:39:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="00432e41-6d8d-4596-afe4-ff6f3d603d30") .free_clusters' 00:15:11.779 06:39:51 -- common/autotest_common.sh@1348 -- # fc=1276 00:15:11.779 06:39:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="00432e41-6d8d-4596-afe4-ff6f3d603d30") .cluster_size' 00:15:11.780 5104 00:15:11.780 06:39:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:15:11.780 06:39:51 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:15:11.780 06:39:51 -- common/autotest_common.sh@1353 -- # echo 5104 00:15:11.780 06:39:51 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:11.780 06:39:51 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 00432e41-6d8d-4596-afe4-ff6f3d603d30 
lbd_nest_0 5104 00:15:12.039 06:39:51 -- host/perf.sh@88 -- # lb_nested_guid=cdbb4e2e-8db0-43e6-8f9b-de2081ff6ad8 00:15:12.039 06:39:51 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.297 06:39:52 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:12.297 06:39:52 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 cdbb4e2e-8db0-43e6-8f9b-de2081ff6ad8 00:15:12.556 06:39:52 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.814 06:39:52 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:12.814 06:39:52 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:12.814 06:39:52 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:12.814 06:39:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:12.814 06:39:52 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.073 No valid NVMe controllers or AIO or URING devices found 00:15:13.073 Initializing NVMe Controllers 00:15:13.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.073 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:13.073 WARNING: Some requested NVMe devices were skipped 00:15:13.073 06:39:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:13.073 06:39:52 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:25.318 Initializing NVMe Controllers 00:15:25.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:25.318 Initialization complete. Launching workers. 
00:15:25.318 ======================================================== 00:15:25.318 Latency(us) 00:15:25.318 Device Information : IOPS MiB/s Average min max 00:15:25.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 972.40 121.55 1027.90 325.69 8121.58 00:15:25.318 ======================================================== 00:15:25.318 Total : 972.40 121.55 1027.90 325.69 8121.58 00:15:25.318 00:15:25.318 06:40:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:25.318 06:40:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:25.318 06:40:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:25.318 No valid NVMe controllers or AIO or URING devices found 00:15:25.318 Initializing NVMe Controllers 00:15:25.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.318 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:25.318 WARNING: Some requested NVMe devices were skipped 00:15:25.318 06:40:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:25.318 06:40:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:35.297 Initializing NVMe Controllers 00:15:35.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:35.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:35.297 Initialization complete. Launching workers. 00:15:35.297 ======================================================== 00:15:35.297 Latency(us) 00:15:35.297 Device Information : IOPS MiB/s Average min max 00:15:35.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1338.03 167.25 23925.84 7506.93 71748.19 00:15:35.297 ======================================================== 00:15:35.297 Total : 1338.03 167.25 23925.84 7506.93 71748.19 00:15:35.297 00:15:35.297 06:40:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:35.297 06:40:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:35.297 06:40:13 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:35.297 No valid NVMe controllers or AIO or URING devices found 00:15:35.297 Initializing NVMe Controllers 00:15:35.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:35.297 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:35.297 WARNING: Some requested NVMe devices were skipped 00:15:35.297 06:40:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:35.297 06:40:13 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:45.288 Initializing NVMe Controllers 00:15:45.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:45.288 Controller IO queue size 128, less than required. 00:15:45.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:45.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:45.288 Initialization complete. Launching workers. 00:15:45.288 ======================================================== 00:15:45.288 Latency(us) 00:15:45.288 Device Information : IOPS MiB/s Average min max 00:15:45.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4081.08 510.14 31387.35 9399.62 69719.65 00:15:45.288 ======================================================== 00:15:45.288 Total : 4081.08 510.14 31387.35 9399.62 69719.65 00:15:45.288 00:15:45.288 06:40:24 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.288 06:40:24 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cdbb4e2e-8db0-43e6-8f9b-de2081ff6ad8 00:15:45.288 06:40:24 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:45.288 06:40:25 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 94be7ad7-606c-4d24-9e6f-b65cd05c3549 00:15:45.546 06:40:25 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:45.804 06:40:25 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:45.804 06:40:25 -- host/perf.sh@114 -- # nvmftestfini 00:15:45.804 06:40:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:45.804 06:40:25 -- nvmf/common.sh@116 -- # sync 00:15:45.804 06:40:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:45.804 06:40:25 -- nvmf/common.sh@119 -- # set +e 00:15:45.804 06:40:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:45.804 06:40:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:45.804 rmmod nvme_tcp 00:15:45.804 rmmod nvme_fabrics 00:15:46.062 rmmod nvme_keyring 00:15:46.062 06:40:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:46.062 06:40:25 -- nvmf/common.sh@123 -- # set -e 00:15:46.062 06:40:25 -- nvmf/common.sh@124 -- # return 0 00:15:46.062 06:40:25 -- nvmf/common.sh@477 -- # '[' -n 80254 ']' 00:15:46.062 06:40:25 -- nvmf/common.sh@478 -- # killprocess 80254 00:15:46.062 06:40:25 -- common/autotest_common.sh@926 -- # '[' -z 80254 ']' 00:15:46.062 06:40:25 -- common/autotest_common.sh@930 -- # kill -0 80254 00:15:46.062 06:40:25 -- common/autotest_common.sh@931 -- # uname 00:15:46.062 06:40:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:46.062 06:40:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80254 00:15:46.062 killing process with pid 80254 00:15:46.062 06:40:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:46.062 06:40:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:46.062 06:40:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80254' 00:15:46.062 06:40:25 -- common/autotest_common.sh@945 -- # kill 80254 00:15:46.062 06:40:25 -- common/autotest_common.sh@950 -- # wait 80254 00:15:47.436 06:40:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.436 06:40:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.436 06:40:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.436 06:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.436 06:40:27 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:47.436 06:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.436 06:40:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:47.436 ************************************ 00:15:47.436 END TEST nvmf_perf 00:15:47.436 ************************************ 00:15:47.436 00:15:47.436 real 0m49.621s 00:15:47.436 user 3m5.156s 00:15:47.436 sys 0m13.218s 00:15:47.436 06:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.436 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:15:47.436 06:40:27 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:47.436 06:40:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:47.436 06:40:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:47.436 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:15:47.436 ************************************ 00:15:47.436 START TEST nvmf_fio_host 00:15:47.436 ************************************ 00:15:47.436 06:40:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:47.436 * Looking for test storage... 00:15:47.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:47.436 06:40:27 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.436 06:40:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.436 06:40:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.436 06:40:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.436 06:40:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- paths/export.sh@5 -- # export PATH 00:15:47.436 06:40:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.436 06:40:27 -- nvmf/common.sh@7 -- # uname -s 00:15:47.436 06:40:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.436 06:40:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.436 06:40:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.436 06:40:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.436 06:40:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.436 06:40:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.436 06:40:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.436 06:40:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.436 06:40:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.436 06:40:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:15:47.436 06:40:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:15:47.436 06:40:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.436 06:40:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.436 06:40:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.436 06:40:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.436 06:40:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.436 06:40:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.436 06:40:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.436 06:40:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- paths/export.sh@5 -- # export PATH 00:15:47.436 06:40:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.436 06:40:27 -- nvmf/common.sh@46 -- # : 0 00:15:47.436 06:40:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:47.436 06:40:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:47.436 06:40:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:47.436 06:40:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.436 06:40:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.436 06:40:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:47.436 06:40:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:47.436 06:40:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:47.436 06:40:27 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.436 06:40:27 -- host/fio.sh@14 -- # nvmftestinit 00:15:47.436 06:40:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:47.436 06:40:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.436 06:40:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:47.436 06:40:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:47.436 06:40:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:47.436 06:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.436 06:40:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.436 06:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.436 06:40:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:47.436 06:40:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:47.436 06:40:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.436 06:40:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.436 06:40:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.436 06:40:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:47.436 06:40:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.436 06:40:27 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.436 06:40:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.436 06:40:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.436 06:40:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.436 06:40:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.436 06:40:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.436 06:40:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.436 06:40:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:47.436 06:40:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:47.436 Cannot find device "nvmf_tgt_br" 00:15:47.436 06:40:27 -- nvmf/common.sh@154 -- # true 00:15:47.436 06:40:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.436 Cannot find device "nvmf_tgt_br2" 00:15:47.436 06:40:27 -- nvmf/common.sh@155 -- # true 00:15:47.436 06:40:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:47.436 06:40:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:47.436 Cannot find device "nvmf_tgt_br" 00:15:47.436 06:40:27 -- nvmf/common.sh@157 -- # true 00:15:47.436 06:40:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:47.436 Cannot find device "nvmf_tgt_br2" 00:15:47.436 06:40:27 -- nvmf/common.sh@158 -- # true 00:15:47.436 06:40:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:47.694 06:40:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:47.695 06:40:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.695 06:40:27 -- nvmf/common.sh@161 -- # true 00:15:47.695 06:40:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.695 06:40:27 -- nvmf/common.sh@162 -- # true 00:15:47.695 06:40:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.695 06:40:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.695 06:40:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.695 06:40:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.695 06:40:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.695 06:40:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.695 06:40:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:47.695 06:40:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:47.695 06:40:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:47.695 06:40:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:47.695 06:40:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:47.695 06:40:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:47.695 06:40:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:47.695 06:40:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.695 06:40:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:15:47.695 06:40:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:47.695 06:40:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:47.695 06:40:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:47.695 06:40:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:47.695 06:40:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.695 06:40:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.695 06:40:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.695 06:40:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.695 06:40:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:47.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:47.695 00:15:47.695 --- 10.0.0.2 ping statistics --- 00:15:47.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.695 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:47.695 06:40:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:47.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:47.695 00:15:47.695 --- 10.0.0.3 ping statistics --- 00:15:47.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.695 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:47.695 06:40:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:47.695 00:15:47.695 --- 10.0.0.1 ping statistics --- 00:15:47.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.695 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:47.695 06:40:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.695 06:40:27 -- nvmf/common.sh@421 -- # return 0 00:15:47.695 06:40:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:47.695 06:40:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.695 06:40:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:47.695 06:40:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:47.695 06:40:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.695 06:40:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:47.695 06:40:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:47.954 06:40:27 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:47.954 06:40:27 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:47.954 06:40:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:47.954 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:15:47.954 06:40:27 -- host/fio.sh@24 -- # nvmfpid=81066 00:15:47.954 06:40:27 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:47.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
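waitforlisten gates the rest of the test on the freshly started target answering on /var/tmp/spdk.sock. A plausible hand-rolled equivalent of that probe is sketched below; treating rpc_get_methods as the readiness check is an assumption about the helper, not a quote from it (UNIX-domain sockets are not network-namespaced, so no ip netns exec is needed):

    # poll until nvmf_tgt answers RPCs (assumed probe, not the actual helper)
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done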
00:15:47.955 06:40:27 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.955 06:40:27 -- host/fio.sh@28 -- # waitforlisten 81066 00:15:47.955 06:40:27 -- common/autotest_common.sh@819 -- # '[' -z 81066 ']' 00:15:47.955 06:40:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.955 06:40:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:47.955 06:40:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.955 06:40:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:47.955 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:15:47.955 [2024-07-12 06:40:27.685327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:47.955 [2024-07-12 06:40:27.685670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.955 [2024-07-12 06:40:27.831126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.955 [2024-07-12 06:40:27.872104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.955 [2024-07-12 06:40:27.872479] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.955 [2024-07-12 06:40:27.872667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.955 [2024-07-12 06:40:27.872894] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
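The host/fio.sh@23-28 steps above amount to: start nvmf_tgt inside that namespace, record its pid (81066 here), and block until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait step; the rpc_get_methods poll is an assumption about how readiness is probed, and the real waitforlisten helper caps the loop at max_retries=100 as traced above:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do    # poll until the RPC socket is serving
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done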
00:15:47.955 [2024-07-12 06:40:27.873152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.955 [2024-07-12 06:40:27.875991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.214 [2024-07-12 06:40:27.876171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.214 [2024-07-12 06:40:27.876180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.147 06:40:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:49.147 06:40:28 -- common/autotest_common.sh@852 -- # return 0 00:15:49.147 06:40:28 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:49.147 [2024-07-12 06:40:28.958165] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.147 06:40:28 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:49.147 06:40:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:49.147 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:15:49.147 06:40:29 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:49.405 Malloc1 00:15:49.405 06:40:29 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:49.662 06:40:29 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:49.920 06:40:29 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.178 [2024-07-12 06:40:30.000977] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.178 06:40:30 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:50.436 06:40:30 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:50.436 06:40:30 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:50.436 06:40:30 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:50.436 06:40:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:50.436 06:40:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:50.436 06:40:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:50.436 06:40:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:50.436 06:40:30 -- common/autotest_common.sh@1320 -- # shift 00:15:50.436 06:40:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:50.436 06:40:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:50.436 06:40:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:50.436 06:40:30 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:50.436 06:40:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:50.436 06:40:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:50.436 06:40:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:50.436 06:40:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:50.694 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:50.694 fio-3.35 00:15:50.694 Starting 1 thread 00:15:53.223 00:15:53.223 test: (groupid=0, jobs=1): err= 0: pid=81144: Fri Jul 12 06:40:32 2024 00:15:53.223 read: IOPS=9002, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 00:15:53.223 slat (nsec): min=1999, max=204655, avg=2626.90, stdev=2173.55 00:15:53.223 clat (usec): min=2891, max=13838, avg=7387.30, stdev=563.76 00:15:53.223 lat (usec): min=2897, max=13840, avg=7389.93, stdev=563.64 00:15:53.223 clat percentiles (usec): 00:15:53.223 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:15:53.223 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:15:53.223 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:15:53.223 | 99.00th=[ 8586], 99.50th=[ 9765], 99.90th=[11731], 99.95th=[12911], 00:15:53.223 | 99.99th=[13829] 00:15:53.223 bw ( KiB/s): min=35544, max=36880, per=99.97%, avg=35998.00, stdev=612.87, samples=4 00:15:53.223 iops : min= 8886, max= 9220, avg=8999.50, stdev=153.57, samples=4 00:15:53.223 write: IOPS=9021, BW=35.2MiB/s (37.0MB/s)(70.7MiB/2007msec); 0 zone resets 00:15:53.223 slat (usec): min=2, max=4031, avg= 2.96, stdev=29.99 00:15:53.223 clat (usec): min=2427, max=12946, avg=6750.95, stdev=519.55 00:15:53.223 lat (usec): min=2453, max=12948, avg=6753.91, stdev=518.58 00:15:53.223 clat percentiles (usec): 00:15:53.223 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6390], 00:15:53.223 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:15:53.223 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7439], 00:15:53.223 | 99.00th=[ 7767], 99.50th=[ 8455], 99.90th=[11076], 99.95th=[11994], 00:15:53.223 | 99.99th=[12911] 00:15:53.223 bw ( KiB/s): min=35840, max=36288, per=100.00%, avg=36098.00, stdev=204.10, samples=4 00:15:53.223 iops : min= 8960, max= 9072, avg=9024.50, stdev=51.03, samples=4 00:15:53.223 lat (msec) : 4=0.08%, 10=99.52%, 20=0.40% 00:15:53.223 cpu : usr=68.25%, sys=23.03%, ctx=7, majf=0, minf=5 00:15:53.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:53.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:53.223 issued rwts: total=18068,18107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:53.223 00:15:53.223 Run status group 0 (all jobs): 00:15:53.223 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.0MB), run=2007-2007msec 
00:15:53.223 WRITE: bw=35.2MiB/s (37.0MB/s), 35.2MiB/s-35.2MiB/s (37.0MB/s-37.0MB/s), io=70.7MiB (74.2MB), run=2007-2007msec 00:15:53.224 06:40:32 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:53.224 06:40:32 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:53.224 06:40:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:53.224 06:40:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:53.224 06:40:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:53.224 06:40:32 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:53.224 06:40:32 -- common/autotest_common.sh@1320 -- # shift 00:15:53.224 06:40:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:53.224 06:40:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:53.224 06:40:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:53.224 06:40:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:53.224 06:40:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:53.224 06:40:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:53.224 06:40:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:53.224 06:40:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:53.224 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:53.224 fio-3.35 00:15:53.224 Starting 1 thread 00:15:55.751 00:15:55.751 test: (groupid=0, jobs=1): err= 0: pid=81197: Fri Jul 12 06:40:35 2024 00:15:55.751 read: IOPS=8078, BW=126MiB/s (132MB/s)(253MiB/2008msec) 00:15:55.751 slat (usec): min=3, max=131, avg= 4.12, stdev= 2.33 00:15:55.751 clat (usec): min=1696, max=18566, avg=8730.26, stdev=2841.05 00:15:55.751 lat (usec): min=1699, max=18569, avg=8734.38, stdev=2841.17 00:15:55.751 clat percentiles (usec): 00:15:55.751 | 1.00th=[ 4178], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6259], 00:15:55.751 | 30.00th=[ 6849], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8979], 00:15:55.751 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12518], 95.00th=[14091], 00:15:55.751 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:15:55.751 | 99.99th=[18482] 00:15:55.751 bw ( KiB/s): min=62336, max=70016, per=50.57%, avg=65368.00, stdev=3277.08, samples=4 00:15:55.751 iops : min= 3896, max= 
4376, avg=4085.50, stdev=204.82, samples=4 00:15:55.751 write: IOPS=4564, BW=71.3MiB/s (74.8MB/s)(133MiB/1869msec); 0 zone resets 00:15:55.751 slat (usec): min=34, max=236, avg=42.10, stdev= 7.98 00:15:55.751 clat (usec): min=4921, max=19752, avg=12615.86, stdev=2159.40 00:15:55.751 lat (usec): min=4959, max=19823, avg=12657.97, stdev=2161.16 00:15:55.751 clat percentiles (usec): 00:15:55.751 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10683], 00:15:55.751 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12518], 60.00th=[13042], 00:15:55.751 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15533], 95.00th=[16450], 00:15:55.751 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:15:55.751 | 99.99th=[19792] 00:15:55.751 bw ( KiB/s): min=63232, max=73696, per=92.99%, avg=67912.00, stdev=4437.77, samples=4 00:15:55.751 iops : min= 3952, max= 4606, avg=4244.50, stdev=277.36, samples=4 00:15:55.751 lat (msec) : 2=0.01%, 4=0.33%, 10=49.31%, 20=50.35% 00:15:55.751 cpu : usr=77.23%, sys=16.79%, ctx=6, majf=0, minf=1 00:15:55.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:55.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.752 issued rwts: total=16221,8531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.752 00:15:55.752 Run status group 0 (all jobs): 00:15:55.752 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (266MB), run=2008-2008msec 00:15:55.752 WRITE: bw=71.3MiB/s (74.8MB/s), 71.3MiB/s-71.3MiB/s (74.8MB/s-74.8MB/s), io=133MiB (140MB), run=1869-1869msec 00:15:55.752 06:40:35 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.752 06:40:35 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:55.752 06:40:35 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:55.752 06:40:35 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:55.752 06:40:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:55.752 06:40:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:15:55.752 06:40:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:55.752 06:40:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:55.752 06:40:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:55.752 06:40:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:15:55.752 06:40:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:55.752 06:40:35 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:56.010 Nvme0n1 00:15:56.010 06:40:35 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:56.268 06:40:36 -- host/fio.sh@53 -- # ls_guid=9566dc85-94d4-4509-bc2c-ff69248c6294 00:15:56.269 06:40:36 -- host/fio.sh@54 -- # get_lvs_free_mb 9566dc85-94d4-4509-bc2c-ff69248c6294 00:15:56.269 06:40:36 -- common/autotest_common.sh@1343 -- # local lvs_uuid=9566dc85-94d4-4509-bc2c-ff69248c6294 00:15:56.269 06:40:36 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:56.269 06:40:36 -- common/autotest_common.sh@1345 -- # local fc 00:15:56.269 06:40:36 -- 
common/autotest_common.sh@1346 -- # local cs 00:15:56.269 06:40:36 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:56.527 06:40:36 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:56.527 { 00:15:56.527 "uuid": "9566dc85-94d4-4509-bc2c-ff69248c6294", 00:15:56.527 "name": "lvs_0", 00:15:56.527 "base_bdev": "Nvme0n1", 00:15:56.527 "total_data_clusters": 4, 00:15:56.527 "free_clusters": 4, 00:15:56.527 "block_size": 4096, 00:15:56.527 "cluster_size": 1073741824 00:15:56.527 } 00:15:56.527 ]' 00:15:56.527 06:40:36 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="9566dc85-94d4-4509-bc2c-ff69248c6294") .free_clusters' 00:15:56.527 06:40:36 -- common/autotest_common.sh@1348 -- # fc=4 00:15:56.527 06:40:36 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="9566dc85-94d4-4509-bc2c-ff69248c6294") .cluster_size' 00:15:56.785 4096 00:15:56.785 06:40:36 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:15:56.785 06:40:36 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:15:56.785 06:40:36 -- common/autotest_common.sh@1353 -- # echo 4096 00:15:56.785 06:40:36 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:56.785 4c31410e-7677-4795-8140-5919b7d1a8eb 00:15:56.785 06:40:36 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:57.044 06:40:36 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:57.304 06:40:37 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:57.562 06:40:37 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:57.562 06:40:37 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:57.562 06:40:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:57.562 06:40:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.562 06:40:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:57.562 06:40:37 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:57.562 06:40:37 -- common/autotest_common.sh@1320 -- # shift 00:15:57.562 06:40:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:57.562 06:40:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:57.562 06:40:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:57.562 06:40:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:57.562 06:40:37 -- 
common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:57.562 06:40:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:57.562 06:40:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:57.562 06:40:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:57.562 06:40:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:57.820 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:57.820 fio-3.35 00:15:57.820 Starting 1 thread 00:16:00.416 00:16:00.416 test: (groupid=0, jobs=1): err= 0: pid=81301: Fri Jul 12 06:40:39 2024 00:16:00.416 read: IOPS=6657, BW=26.0MiB/s (27.3MB/s)(52.2MiB/2008msec) 00:16:00.416 slat (usec): min=2, max=241, avg= 2.72, stdev= 2.66 00:16:00.416 clat (usec): min=2663, max=17831, avg=10015.04, stdev=837.49 00:16:00.416 lat (usec): min=2670, max=17834, avg=10017.76, stdev=837.27 00:16:00.416 clat percentiles (usec): 00:16:00.416 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:16:00.416 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:16:00.416 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:16:00.416 | 99.00th=[11863], 99.50th=[12256], 99.90th=[15664], 99.95th=[16909], 00:16:00.416 | 99.99th=[17695] 00:16:00.416 bw ( KiB/s): min=25472, max=27248, per=99.90%, avg=26604.00, stdev=789.87, samples=4 00:16:00.416 iops : min= 6368, max= 6812, avg=6651.00, stdev=197.47, samples=4 00:16:00.416 write: IOPS=6664, BW=26.0MiB/s (27.3MB/s)(52.3MiB/2008msec); 0 zone resets 00:16:00.416 slat (usec): min=2, max=173, avg= 2.85, stdev= 1.76 00:16:00.416 clat (usec): min=1808, max=16566, avg=9094.12, stdev=776.52 00:16:00.416 lat (usec): min=1819, max=16569, avg=9096.96, stdev=776.45 00:16:00.416 clat percentiles (usec): 00:16:00.416 | 1.00th=[ 7439], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8455], 00:16:00.416 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:16:00.416 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:16:00.416 | 99.00th=[10814], 99.50th=[11207], 99.90th=[14353], 99.95th=[15401], 00:16:00.416 | 99.99th=[16581] 00:16:00.416 bw ( KiB/s): min=26432, max=26880, per=99.94%, avg=26642.00, stdev=190.03, samples=4 00:16:00.416 iops : min= 6608, max= 6720, avg=6660.50, stdev=47.51, samples=4 00:16:00.416 lat (msec) : 2=0.01%, 4=0.09%, 10=70.05%, 20=29.86% 00:16:00.416 cpu : usr=70.20%, sys=23.12%, ctx=26, majf=0, minf=5 00:16:00.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:00.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.416 issued rwts: total=13368,13382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.416 00:16:00.416 Run status group 0 (all jobs): 00:16:00.416 READ: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=52.2MiB (54.8MB), run=2008-2008msec 00:16:00.416 WRITE: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=52.3MiB (54.8MB), run=2008-2008msec 00:16:00.416 06:40:39 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:00.416 06:40:40 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:16:00.716 06:40:40 -- host/fio.sh@64 -- # ls_nested_guid=a248f7f1-e9f4-41a2-9302-07d10ea53a7b 00:16:00.716 06:40:40 -- host/fio.sh@65 -- # get_lvs_free_mb a248f7f1-e9f4-41a2-9302-07d10ea53a7b 00:16:00.716 06:40:40 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a248f7f1-e9f4-41a2-9302-07d10ea53a7b 00:16:00.716 06:40:40 -- common/autotest_common.sh@1344 -- # local lvs_info 00:16:00.716 06:40:40 -- common/autotest_common.sh@1345 -- # local fc 00:16:00.716 06:40:40 -- common/autotest_common.sh@1346 -- # local cs 00:16:00.716 06:40:40 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:00.716 06:40:40 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:16:00.716 { 00:16:00.716 "uuid": "9566dc85-94d4-4509-bc2c-ff69248c6294", 00:16:00.716 "name": "lvs_0", 00:16:00.716 "base_bdev": "Nvme0n1", 00:16:00.716 "total_data_clusters": 4, 00:16:00.716 "free_clusters": 0, 00:16:00.716 "block_size": 4096, 00:16:00.716 "cluster_size": 1073741824 00:16:00.716 }, 00:16:00.716 { 00:16:00.716 "uuid": "a248f7f1-e9f4-41a2-9302-07d10ea53a7b", 00:16:00.716 "name": "lvs_n_0", 00:16:00.716 "base_bdev": "4c31410e-7677-4795-8140-5919b7d1a8eb", 00:16:00.716 "total_data_clusters": 1022, 00:16:00.716 "free_clusters": 1022, 00:16:00.716 "block_size": 4096, 00:16:00.716 "cluster_size": 4194304 00:16:00.716 } 00:16:00.716 ]' 00:16:00.716 06:40:40 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a248f7f1-e9f4-41a2-9302-07d10ea53a7b") .free_clusters' 00:16:00.716 06:40:40 -- common/autotest_common.sh@1348 -- # fc=1022 00:16:00.716 06:40:40 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a248f7f1-e9f4-41a2-9302-07d10ea53a7b") .cluster_size' 00:16:00.716 4088 00:16:00.716 06:40:40 -- common/autotest_common.sh@1349 -- # cs=4194304 00:16:00.716 06:40:40 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:16:00.716 06:40:40 -- common/autotest_common.sh@1353 -- # echo 4088 00:16:00.716 06:40:40 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:16:00.974 37bc3d73-60fc-4cca-8527-13e51fa07a51 00:16:00.974 06:40:40 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:16:01.232 06:40:41 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:16:01.490 06:40:41 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:01.749 06:40:41 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:01.749 06:40:41 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:01.749 06:40:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:01.749 06:40:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:01.749 
06:40:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:01.749 06:40:41 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.749 06:40:41 -- common/autotest_common.sh@1320 -- # shift 00:16:01.749 06:40:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:01.749 06:40:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:01.749 06:40:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:01.749 06:40:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:01.749 06:40:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:16:01.749 06:40:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:16:01.749 06:40:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:01.749 06:40:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:02.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:02.007 fio-3.35 00:16:02.007 Starting 1 thread 00:16:04.537 00:16:04.537 test: (groupid=0, jobs=1): err= 0: pid=81374: Fri Jul 12 06:40:43 2024 00:16:04.537 read: IOPS=5907, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec) 00:16:04.537 slat (usec): min=2, max=342, avg= 2.54, stdev= 3.87 00:16:04.537 clat (usec): min=3192, max=19936, avg=11317.89, stdev=948.33 00:16:04.537 lat (usec): min=3202, max=19939, avg=11320.43, stdev=947.97 00:16:04.537 clat percentiles (usec): 00:16:04.537 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:16:04.537 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:16:04.537 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:16:04.537 | 99.00th=[13304], 99.50th=[13698], 99.90th=[18482], 99.95th=[19792], 00:16:04.537 | 99.99th=[19792] 00:16:04.537 bw ( KiB/s): min=22840, max=23984, per=99.93%, avg=23614.00, stdev=529.99, samples=4 00:16:04.537 iops : min= 5710, max= 5996, avg=5903.50, stdev=132.50, samples=4 00:16:04.537 write: IOPS=5906, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec); 0 zone resets 00:16:04.537 slat (usec): min=2, max=275, avg= 2.61, stdev= 2.71 00:16:04.537 clat (usec): min=2453, max=19656, avg=10238.35, stdev=888.93 00:16:04.537 lat (usec): min=2466, max=19659, avg=10240.96, stdev=888.77 00:16:04.537 clat percentiles (usec): 00:16:04.537 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:16:04.537 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:16:04.537 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:16:04.537 | 99.00th=[12256], 99.50th=[12649], 99.90th=[16909], 99.95th=[17171], 00:16:04.537 | 99.99th=[18744] 00:16:04.537 bw ( KiB/s): 
min=23552, max=23680, per=99.90%, avg=23602.00, stdev=62.10, samples=4 00:16:04.537 iops : min= 5888, max= 5920, avg=5900.50, stdev=15.52, samples=4 00:16:04.537 lat (msec) : 4=0.06%, 10=22.08%, 20=77.86% 00:16:04.537 cpu : usr=73.51%, sys=21.12%, ctx=4, majf=0, minf=5 00:16:04.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:04.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.537 issued rwts: total=11869,11866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.537 00:16:04.537 Run status group 0 (all jobs): 00:16:04.537 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2009-2009msec 00:16:04.537 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2009-2009msec 00:16:04.537 06:40:44 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:04.537 06:40:44 -- host/fio.sh@74 -- # sync 00:16:04.537 06:40:44 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:04.795 06:40:44 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:05.053 06:40:44 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:05.312 06:40:45 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:05.570 06:40:45 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:06.505 06:40:46 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:06.505 06:40:46 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:06.505 06:40:46 -- host/fio.sh@86 -- # nvmftestfini 00:16:06.505 06:40:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:06.505 06:40:46 -- nvmf/common.sh@116 -- # sync 00:16:06.505 06:40:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:06.505 06:40:46 -- nvmf/common.sh@119 -- # set +e 00:16:06.505 06:40:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:06.505 06:40:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:06.505 rmmod nvme_tcp 00:16:06.505 rmmod nvme_fabrics 00:16:06.505 rmmod nvme_keyring 00:16:06.505 06:40:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:06.505 06:40:46 -- nvmf/common.sh@123 -- # set -e 00:16:06.505 06:40:46 -- nvmf/common.sh@124 -- # return 0 00:16:06.505 06:40:46 -- nvmf/common.sh@477 -- # '[' -n 81066 ']' 00:16:06.505 06:40:46 -- nvmf/common.sh@478 -- # killprocess 81066 00:16:06.505 06:40:46 -- common/autotest_common.sh@926 -- # '[' -z 81066 ']' 00:16:06.505 06:40:46 -- common/autotest_common.sh@930 -- # kill -0 81066 00:16:06.505 06:40:46 -- common/autotest_common.sh@931 -- # uname 00:16:06.506 06:40:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.506 06:40:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81066 00:16:06.506 06:40:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:06.506 killing process with pid 81066 00:16:06.506 06:40:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:06.506 06:40:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81066' 00:16:06.506 06:40:46 -- common/autotest_common.sh@945 -- # kill 
81066 00:16:06.506 06:40:46 -- common/autotest_common.sh@950 -- # wait 81066 00:16:06.763 06:40:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:06.763 06:40:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:06.763 06:40:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:06.764 06:40:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.764 06:40:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:06.764 06:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.764 06:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.764 06:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.764 06:40:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:06.764 ************************************ 00:16:06.764 END TEST nvmf_fio_host 00:16:06.764 ************************************ 00:16:06.764 00:16:06.764 real 0m19.437s 00:16:06.764 user 1m25.730s 00:16:06.764 sys 0m4.419s 00:16:06.764 06:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.764 06:40:46 -- common/autotest_common.sh@10 -- # set +x 00:16:06.764 06:40:46 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:06.764 06:40:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:06.764 06:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.764 06:40:46 -- common/autotest_common.sh@10 -- # set +x 00:16:06.764 ************************************ 00:16:06.764 START TEST nvmf_failover 00:16:06.764 ************************************ 00:16:06.764 06:40:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:07.022 * Looking for test storage... 
00:16:07.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:07.022 06:40:46 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.022 06:40:46 -- nvmf/common.sh@7 -- # uname -s 00:16:07.022 06:40:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.022 06:40:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.022 06:40:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.022 06:40:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.022 06:40:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.022 06:40:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.022 06:40:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.022 06:40:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.022 06:40:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.022 06:40:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.022 06:40:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:16:07.022 06:40:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:16:07.022 06:40:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.022 06:40:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.022 06:40:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.022 06:40:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.022 06:40:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.022 06:40:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.022 06:40:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.022 06:40:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.022 06:40:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.022 06:40:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.022 06:40:46 -- paths/export.sh@5 
-- # export PATH 00:16:07.023 06:40:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.023 06:40:46 -- nvmf/common.sh@46 -- # : 0 00:16:07.023 06:40:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:07.023 06:40:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:07.023 06:40:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:07.023 06:40:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.023 06:40:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.023 06:40:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:07.023 06:40:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:07.023 06:40:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:07.023 06:40:46 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.023 06:40:46 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.023 06:40:46 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.023 06:40:46 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:07.023 06:40:46 -- host/failover.sh@18 -- # nvmftestinit 00:16:07.023 06:40:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:07.023 06:40:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.023 06:40:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:07.023 06:40:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:07.023 06:40:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:07.023 06:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.023 06:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.023 06:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.023 06:40:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:07.023 06:40:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:07.023 06:40:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:07.023 06:40:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:07.023 06:40:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:07.023 06:40:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:07.023 06:40:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.023 06:40:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.023 06:40:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:07.023 06:40:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:07.023 06:40:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.023 06:40:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.023 06:40:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.023 06:40:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.023 06:40:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.023 06:40:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.023 06:40:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
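nvmftestinit here is just a dispatcher: with NET_TYPE=virt the phy and phy-fallback branches at nvmf/common.sh@411-414 are skipped, and the tcp check at @419-420 rebuilds the veth topology from scratch. Roughly, with variable names approximated from the helpers traced above:

    remove_spdk_ns                     # drop any stale nvmf_tgt_ns_spdk first
    if [[ $NET_TYPE == virt ]] && [[ $TEST_TRANSPORT == tcp ]]; then
        nvmf_veth_init                 # recreate the 10.0.0.x namespace/bridge setup
    fi

This is why the same "Cannot find device" noise from the cleanup half repeats below before the links are recreated.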
00:16:07.023 06:40:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.023 06:40:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:07.023 06:40:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:07.023 Cannot find device "nvmf_tgt_br" 00:16:07.023 06:40:46 -- nvmf/common.sh@154 -- # true 00:16:07.023 06:40:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.023 Cannot find device "nvmf_tgt_br2" 00:16:07.023 06:40:46 -- nvmf/common.sh@155 -- # true 00:16:07.023 06:40:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:07.023 06:40:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:07.023 Cannot find device "nvmf_tgt_br" 00:16:07.023 06:40:46 -- nvmf/common.sh@157 -- # true 00:16:07.023 06:40:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:07.023 Cannot find device "nvmf_tgt_br2" 00:16:07.023 06:40:46 -- nvmf/common.sh@158 -- # true 00:16:07.023 06:40:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:07.023 06:40:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:07.023 06:40:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.023 06:40:46 -- nvmf/common.sh@161 -- # true 00:16:07.023 06:40:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.023 06:40:46 -- nvmf/common.sh@162 -- # true 00:16:07.023 06:40:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.023 06:40:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.023 06:40:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.023 06:40:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.023 06:40:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.023 06:40:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.023 06:40:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.282 06:40:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:07.282 06:40:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:07.282 06:40:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:07.282 06:40:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:07.282 06:40:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:07.282 06:40:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:07.282 06:40:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.282 06:40:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.282 06:40:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.282 06:40:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:07.282 06:40:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:07.282 06:40:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.282 06:40:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.282 06:40:47 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:16:07.282 06:40:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.282 06:40:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.282 06:40:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:07.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:07.282 00:16:07.282 --- 10.0.0.2 ping statistics --- 00:16:07.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.282 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:07.282 06:40:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:07.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:07.282 00:16:07.282 --- 10.0.0.3 ping statistics --- 00:16:07.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.282 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:07.282 06:40:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:07.282 00:16:07.282 --- 10.0.0.1 ping statistics --- 00:16:07.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.282 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:07.282 06:40:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.282 06:40:47 -- nvmf/common.sh@421 -- # return 0 00:16:07.282 06:40:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:07.282 06:40:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.282 06:40:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:07.282 06:40:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:07.282 06:40:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.282 06:40:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:07.282 06:40:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:07.282 06:40:47 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:07.282 06:40:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:07.282 06:40:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:07.282 06:40:47 -- common/autotest_common.sh@10 -- # set +x 00:16:07.282 06:40:47 -- nvmf/common.sh@469 -- # nvmfpid=81615 00:16:07.282 06:40:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:07.282 06:40:47 -- nvmf/common.sh@470 -- # waitforlisten 81615 00:16:07.282 06:40:47 -- common/autotest_common.sh@819 -- # '[' -z 81615 ']' 00:16:07.282 06:40:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.282 06:40:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.282 06:40:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.282 06:40:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.282 06:40:47 -- common/autotest_common.sh@10 -- # set +x 00:16:07.282 [2024-07-12 06:40:47.131121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:07.282 [2024-07-12 06:40:47.131216] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.540 [2024-07-12 06:40:47.270430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.540 [2024-07-12 06:40:47.302243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.540 [2024-07-12 06:40:47.302663] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.540 [2024-07-12 06:40:47.302789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.540 [2024-07-12 06:40:47.302920] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.540 [2024-07-12 06:40:47.303155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.540 [2024-07-12 06:40:47.303214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.540 [2024-07-12 06:40:47.303214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.472 06:40:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.472 06:40:48 -- common/autotest_common.sh@852 -- # return 0 00:16:08.472 06:40:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:08.472 06:40:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:08.472 06:40:48 -- common/autotest_common.sh@10 -- # set +x 00:16:08.472 06:40:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.472 06:40:48 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:08.472 [2024-07-12 06:40:48.370018] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.472 06:40:48 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:08.730 Malloc0 00:16:08.730 06:40:48 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:08.988 06:40:48 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.246 06:40:49 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.505 [2024-07-12 06:40:49.297237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.505 06:40:49 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:09.763 [2024-07-12 06:40:49.513413] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:09.763 06:40:49 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:10.023 [2024-07-12 06:40:49.733582] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:10.023 06:40:49 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
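The failover harness wires things up so one subsystem is reachable on three ports: cnode1 backed by Malloc0 gets listeners on 4420, 4421 and 4422, bdevperf starts in wait-for-RPC mode (-z) on its own socket, and the controller is attached through two of the paths before I/O begins, so listeners can then be removed and re-added underneath live traffic. A condensed sketch of the commands above and just below (the third path on 4422 is attached mid-run):

    # One subsystem, three TCP listeners.
    for port in 4420 4421 4422; do
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s $port
    done

    # bdevperf on its own RPC socket: QD 128, 4 KiB blocks, verify workload, 15 s.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 -f &

    # Two paths to the same subsystem under one bdev name, then start I/O.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The tcp.c "recv state of tqpair ... is same with the state(5)" errors that follow are the target tearing down qpairs as each listener is yanked; bdevperf's verify workload is expected to ride through them on the surviving path.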
00:16:10.023 06:40:49 -- host/failover.sh@31 -- # bdevperf_pid=81673 00:16:10.023 06:40:49 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:10.023 06:40:49 -- host/failover.sh@34 -- # waitforlisten 81673 /var/tmp/bdevperf.sock 00:16:10.023 06:40:49 -- common/autotest_common.sh@819 -- # '[' -z 81673 ']' 00:16:10.023 06:40:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.023 06:40:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.023 06:40:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.023 06:40:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.023 06:40:49 -- common/autotest_common.sh@10 -- # set +x 00:16:10.958 06:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:10.958 06:40:50 -- common/autotest_common.sh@852 -- # return 0 00:16:10.958 06:40:50 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:11.217 NVMe0n1 00:16:11.217 06:40:51 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:11.475 00:16:11.475 06:40:51 -- host/failover.sh@39 -- # run_test_pid=81698 00:16:11.475 06:40:51 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:11.475 06:40:51 -- host/failover.sh@41 -- # sleep 1 00:16:12.851 06:40:52 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.851 [2024-07-12 06:40:52.642491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is same with the state(5) to be set 00:16:12.851 [2024-07-12 06:40:52.642633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12897d0 is 
same with the state(5) to be set
[... same tcp.c:1574 error for tqpair=0x12897d0 repeated 17 more times, 06:40:52.642643 through 06:40:52.642781 ...]
00:16:12.851 06:40:52 -- host/failover.sh@45 -- # sleep 3
00:16:16.139 06:40:55 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:16.139 
00:16:16.140 06:40:56 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:16.398 [2024-07-12 06:40:56.240772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128a400 is same with the state(5) to be set
[... same tcp.c:1574 error for tqpair=0x128a400 repeated 40 more times, 06:40:56.240833 through 06:40:56.241248 ...]
00:16:16.399 06:40:56 -- host/failover.sh@50 -- # sleep 3
00:16:19.683 06:40:59 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:19.683 [2024-07-12 06:40:59.521804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:19.683 06:40:59 -- host/failover.sh@55 -- # sleep 1
00:16:20.636 06:41:00 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:16:20.931 [2024-07-12 06:41:00.801330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142dcc0 is same with the state(5) to be set
[... same tcp.c:1574 error for tqpair=0x142dcc0 repeated 4 more times, 06:41:00.801386 through 06:41:00.801417 ...]
00:16:20.932 06:41:00 -- host/failover.sh@59 -- # wait 81698
00:16:27.507 0
00:16:27.507 06:41:06 -- host/failover.sh@61 -- # killprocess 81673
00:16:27.507 06:41:06 -- common/autotest_common.sh@926 -- # '[' -z 81673 ']'
00:16:27.507 06:41:06 -- common/autotest_common.sh@930 -- # kill -0 81673
00:16:27.507 06:41:06 -- common/autotest_common.sh@931 -- # uname
00:16:27.507 06:41:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:27.507 06:41:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81673
00:16:27.507 killing process with pid 81673
00:16:27.507 06:41:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:16:27.507 06:41:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:16:27.507 06:41:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81673'
00:16:27.507 06:41:06 -- common/autotest_common.sh@945 -- # kill 81673
00:16:27.507 06:41:06 -- common/autotest_common.sh@950 -- # wait 81673
00:16:27.507 06:41:06 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:27.507 [2024-07-12 06:40:49.792076] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:16:27.507 [2024-07-12 06:40:49.792262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81673 ]
00:16:27.507 [2024-07-12 06:40:49.928008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:27.507 [2024-07-12 06:40:49.961089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:27.507 Running I/O for 15 seconds...
00:16:27.507 [2024-07-12 06:40:52.644023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:27.507 [2024-07-12 06:40:52.644341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for about 126 more queued READ/WRITE commands on sqid:1 (lba 118184 through 119488, len:8), every one completed as ABORTED - SQ DELETION, 06:40:52.644463 through 06:40:52.665189 ...]
00:16:27.511 [2024-07-12 06:40:52.665265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e533d0 is same with the state(5) to be set
00:16:27.511 [2024-07-12 06:40:52.665351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:27.511 [2024-07-12 06:40:52.665414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:27.511 [2024-07-12 06:40:52.665499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118912 len:8 PRP1 0x0 PRP2 0x0
00:16:27.511 [2024-07-12 06:40:52.665577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:52.665687] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e533d0 was disconnected and freed. reset controller.
00:16:27.511 [2024-07-12 06:40:52.665770] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:27.511 [2024-07-12 06:40:52.665915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:52.666032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:52.666128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:52.666210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:52.666283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:52.666359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:52.666432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:52.666506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:52.666579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:27.511 [2024-07-12 06:40:52.666746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2bc80 (9): Bad file descriptor
00:16:27.511 [2024-07-12 06:40:52.669487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:27.511 [2024-07-12 06:40:52.699546] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:27.511 [2024-07-12 06:40:56.240616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:56.242852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:56.242993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:56.243071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:56.243139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:56.243249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:56.243323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:27.511 [2024-07-12 06:40:56.243404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.511 [2024-07-12 06:40:56.243468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2bc80 is same with the state(5) to be set
[... 2024-07-12 06:40:56.243615 through 06:40:56.256809: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted (READ/WRITE sqid:1, lbas 87264-88608, every completion ABORTED - SQ DELETION (00/08) qid:1) ...]
00:16:27.514 [2024-07-12 06:40:56.256860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:27.514 [2024-07-12 06:40:56.256876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:27.514 [2024-07-12 06:40:56.256888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87992 len:8 PRP1 0x0 PRP2 0x0
00:16:27.514 [2024-07-12 06:40:56.256901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:27.514 [2024-07-12 06:40:56.256949] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e4ab80 was disconnected and freed. reset controller.
00:16:27.514 [2024-07-12 06:40:56.256981] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:16:27.514 [2024-07-12 06:40:56.256997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:27.514 [2024-07-12 06:40:56.257058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2bc80 (9): Bad file descriptor
00:16:27.514 [2024-07-12 06:40:56.259485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:27.514 [2024-07-12 06:40:56.288817] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:27.514 [2024-07-12 06:41:00.801504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:27.514 [2024-07-12 06:41:00.801574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-07-12 06:41:00.801602 onward: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted (READ/WRITE sqid:1, lbas 35752-36696, every completion ABORTED - SQ DELETION (00/08) qid:1) ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.516 [2024-07-12 06:41:00.803465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.516 [2024-07-12 06:41:00.803756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.516 [2024-07-12 06:41:00.803815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.803963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.803979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.516 [2024-07-12 06:41:00.804004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.516 [2024-07-12 06:41:00.804349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.516 [2024-07-12 06:41:00.804379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.516 [2024-07-12 06:41:00.804394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 
[2024-07-12 06:41:00.804640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.804774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.804969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.804986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:72 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.517 [2024-07-12 06:41:00.805339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.517 [2024-07-12 06:41:00.805546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27aa0 is same with the 
state(5) to be set 00:16:27.517 [2024-07-12 06:41:00.805579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:27.517 [2024-07-12 06:41:00.805590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:27.517 [2024-07-12 06:41:00.805603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36416 len:8 PRP1 0x0 PRP2 0x0 00:16:27.517 [2024-07-12 06:41:00.805617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805665] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e27aa0 was disconnected and freed. reset controller. 00:16:27.517 [2024-07-12 06:41:00.805683] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:27.517 [2024-07-12 06:41:00.805739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.517 [2024-07-12 06:41:00.805772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.517 [2024-07-12 06:41:00.805788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.517 [2024-07-12 06:41:00.805802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.518 [2024-07-12 06:41:00.805817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.518 [2024-07-12 06:41:00.805830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.518 [2024-07-12 06:41:00.805844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.518 [2024-07-12 06:41:00.805858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.518 [2024-07-12 06:41:00.805871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:27.518 [2024-07-12 06:41:00.805905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2bc80 (9): Bad file descriptor 00:16:27.518 [2024-07-12 06:41:00.808406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:27.518 [2024-07-12 06:41:00.841995] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
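Note: the wall of SQ DELETION aborts above is the expected signature of a path switch, not a defect; the target tears down the submission queue, every in-flight command is failed back to bdev_nvme, and the controller is reset on the alternate trid. To tally the path changes recorded in a captured log such as the try.txt this harness writes, a post-processing one-liner along these lines works (the grep pattern is just a sketch; the file path is the one used by this run):

    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c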
00:16:27.518
00:16:27.518 Latency(us)
00:16:27.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:27.518 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:27.518 Verification LBA range: start 0x0 length 0x4000
00:16:27.518 NVMe0n1 : 15.01 12370.79 48.32 299.89 0.00 10082.33 420.77 32648.84
00:16:27.518 ===================================================================================================================
00:16:27.518 Total : 12370.79 48.32 299.89 0.00 10082.33 420.77 32648.84
00:16:27.518 Received shutdown signal, test time was about 15.000000 seconds
00:16:27.518
00:16:27.518 Latency(us)
00:16:27.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:27.518 ===================================================================================================================
00:16:27.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:27.518 06:41:06 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:27.518 06:41:06 -- host/failover.sh@65 -- # count=3
00:16:27.518 06:41:06 -- host/failover.sh@67 -- # (( count != 3 ))
00:16:27.518 06:41:06 -- host/failover.sh@73 -- # bdevperf_pid=81875
00:16:27.518 06:41:06 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:16:27.518 06:41:06 -- host/failover.sh@75 -- # waitforlisten 81875 /var/tmp/bdevperf.sock
00:16:27.518 06:41:06 -- common/autotest_common.sh@819 -- # '[' -z 81875 ']'
00:16:27.518 06:41:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:27.518 06:41:06 -- common/autotest_common.sh@824 -- # local max_retries=100
00:16:27.518 06:41:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
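Note: the @65/@67 trace lines above are the test's actual pass gate: it counts 'Resetting controller successful' messages in its own captured output and requires exactly three. A minimal sketch of the same check, assuming the log was saved to the try.txt used in this run:

    count=$(grep -c 'Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }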
00:16:27.518 06:41:06 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:27.518 06:41:06 -- common/autotest_common.sh@10 -- # set +x
00:16:28.084 06:41:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:28.084 06:41:07 -- common/autotest_common.sh@852 -- # return 0
00:16:28.084 06:41:07 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:28.084 [2024-07-12 06:41:07.955473] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:28.084 06:41:07 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:16:28.343 [2024-07-12 06:41:08.175652] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:16:28.343 06:41:08 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:28.601 NVMe0n1
00:16:28.602 06:41:08 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:29.168
00:16:29.168 06:41:08 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:29.427
00:16:29.427 06:41:09 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:29.427 06:41:09 -- host/failover.sh@82 -- # grep -q NVMe0
00:16:29.686 06:41:09 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:29.686 06:41:09 -- host/failover.sh@87 -- # sleep 3
00:16:32.968 06:41:12 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:32.968 06:41:12 -- host/failover.sh@88 -- # grep -q NVMe0
00:16:33.225 06:41:12 -- host/failover.sh@90 -- # run_test_pid=81956
00:16:33.225 06:41:12 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:33.225 06:41:12 -- host/failover.sh@92 -- # wait 81956
00:16:34.156 0
00:16:34.156 06:41:14 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:34.156 [2024-07-12 06:41:06.757390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:16:34.156 [2024-07-12 06:41:06.757494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81875 ]
00:16:34.156 [2024-07-12 06:41:06.890530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:34.156 [2024-07-12 06:41:06.923526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:34.156 [2024-07-12 06:41:09.582340] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:34.156 [2024-07-12 06:41:09.582471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:34.156 [2024-07-12 06:41:09.582497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.156 [2024-07-12 06:41:09.582516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:34.156 [2024-07-12 06:41:09.582530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.156 [2024-07-12 06:41:09.582544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:34.156 [2024-07-12 06:41:09.582557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.157 [2024-07-12 06:41:09.582571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:34.157 [2024-07-12 06:41:09.582584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.157 [2024-07-12 06:41:09.582598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:34.157 [2024-07-12 06:41:09.582678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:34.157 [2024-07-12 06:41:09.582712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2027c80 (9): Bad file descriptor
00:16:34.157 [2024-07-12 06:41:09.592128] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:34.157 Running I/O for 1 seconds...
00:16:34.157
00:16:34.157 Latency(us)
00:16:34.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:34.157 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:34.157 Verification LBA range: start 0x0 length 0x4000
00:16:34.157 NVMe0n1 : 1.01 12861.98 50.24 0.00 0.00 9900.96 1206.46 13345.51
00:16:34.157 ===================================================================================================================
00:16:34.157 Total : 12861.98 50.24 0.00 0.00 9900.96 1206.46 13345.51
00:16:34.157 06:41:14 -- host/failover.sh@95 -- # grep -q NVMe0
00:16:34.157 06:41:14 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:34.414 06:41:14 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:34.671 06:41:14 -- host/failover.sh@99 -- # grep -q NVMe0
00:16:34.671 06:41:14 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:34.929 06:41:14 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:35.187 06:41:15 -- host/failover.sh@101 -- # sleep 3
00:16:38.472 06:41:18 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:38.472 06:41:18 -- host/failover.sh@103 -- # grep -q NVMe0
00:16:38.472 06:41:18 -- host/failover.sh@108 -- # killprocess 81875
00:16:38.472 06:41:18 -- common/autotest_common.sh@926 -- # '[' -z 81875 ']'
00:16:38.472 06:41:18 -- common/autotest_common.sh@930 -- # kill -0 81875
00:16:38.472 06:41:18 -- common/autotest_common.sh@931 -- # uname
00:16:38.472 06:41:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:38.472 06:41:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81875
00:16:38.472 killing process with pid 81875
06:41:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:16:38.472 06:41:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:16:38.472 06:41:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81875'
00:16:38.472 06:41:18 -- common/autotest_common.sh@945 -- # kill 81875
00:16:38.472 06:41:18 -- common/autotest_common.sh@950 -- # wait 81875
00:16:38.730 06:41:18 -- host/failover.sh@110 -- # sync
00:16:38.730 06:41:18 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:38.990 06:41:18 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:16:38.990 06:41:18 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:38.990 06:41:18 -- host/failover.sh@116 -- # nvmftestfini
00:16:38.990 06:41:18 -- nvmf/common.sh@476 -- # nvmfcleanup
00:16:38.990 06:41:18 -- nvmf/common.sh@116 -- # sync
00:16:38.990 06:41:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:38.990 06:41:18 -- nvmf/common.sh@119 -- # set +e
00:16:38.990 06:41:18 -- nvmf/common.sh@120 -- # for i in {1..20}
00:16:38.990 06:41:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:38.990 rmmod nvme_tcp
00:16:38.990 rmmod nvme_fabrics
00:16:38.990 rmmod nvme_keyring
00:16:38.990 06:41:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:38.990 06:41:18 -- nvmf/common.sh@123 -- # set -e
00:16:38.990 06:41:18 -- nvmf/common.sh@124 -- # return 0
00:16:38.990 06:41:18 -- nvmf/common.sh@477 -- # '[' -n 81615 ']'
00:16:38.990 06:41:18 -- nvmf/common.sh@478 -- # killprocess 81615
00:16:38.990 06:41:18 -- common/autotest_common.sh@926 -- # '[' -z 81615 ']'
00:16:38.990 06:41:18 -- common/autotest_common.sh@930 -- # kill -0 81615
00:16:38.990 06:41:18 -- common/autotest_common.sh@931 -- # uname
00:16:38.990 06:41:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:38.990 06:41:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81615
00:16:38.990 killing process with pid 81615
06:41:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:16:38.990 06:41:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:16:38.990 06:41:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81615'
00:16:38.990 06:41:18 -- common/autotest_common.sh@945 -- # kill 81615
00:16:38.990 06:41:18 -- common/autotest_common.sh@950 -- # wait 81615
00:16:39.280 06:41:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:39.280 06:41:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:39.280 06:41:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:39.280 06:41:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:39.280 06:41:18 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:39.280 06:41:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:39.280 06:41:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:39.280 06:41:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:39.280 06:41:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:16:39.280
00:16:39.280 real 0m32.326s
00:16:39.280 user 2m5.545s
00:16:39.280 sys 0m5.580s
00:16:39.280 06:41:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:39.280 06:41:18 -- common/autotest_common.sh@10 -- # set +x
00:16:39.280 ************************************
00:16:39.280 END TEST nvmf_failover
00:16:39.280 ************************************
00:16:39.280 06:41:19 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:16:39.280 06:41:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:16:39.280 06:41:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:16:39.280 06:41:19 -- common/autotest_common.sh@10 -- # set +x
00:16:39.280 ************************************
00:16:39.280 START TEST nvmf_discovery
00:16:39.280 ************************************
00:16:39.280 06:41:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:16:39.280 * Looking for test storage...
00:16:39.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:16:39.280 06:41:19 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:39.280 06:41:19 -- nvmf/common.sh@7 -- # uname -s
00:16:39.280 06:41:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:39.280 06:41:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:39.280 06:41:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:39.280 06:41:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:39.280 06:41:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:39.280 06:41:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:39.280 06:41:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:39.280 06:41:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:39.280 06:41:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:39.280 06:41:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:39.280 06:41:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168
00:16:39.280 06:41:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168
00:16:39.280 06:41:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:39.280 06:41:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:39.280 06:41:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:39.280 06:41:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:39.280 06:41:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:39.280 06:41:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:39.280 06:41:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:39.280 06:41:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... duplicated toolchain path segments trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:39.280 06:41:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated toolchain path segments trimmed ...]:/var/lib/snapd/snap/bin
00:16:39.280 06:41:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated toolchain path segments trimmed ...]:/var/lib/snapd/snap/bin
00:16:39.280 06:41:19 -- paths/export.sh@5 -- # export PATH
00:16:39.280 06:41:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... duplicated toolchain path segments trimmed ...]:/var/lib/snapd/snap/bin
00:16:39.280 06:41:19 -- nvmf/common.sh@46 -- # : 0
00:16:39.280 06:41:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:39.280 06:41:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:39.280 06:41:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:39.280 06:41:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:39.280 06:41:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:39.280 06:41:19 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:39.280 06:41:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:39.280 06:41:19 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:39.280 06:41:19 -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:16:39.280 06:41:19 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:16:39.280 06:41:19 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:16:39.280 06:41:19 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:16:39.280 06:41:19 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:16:39.280 06:41:19 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:16:39.280 06:41:19 -- host/discovery.sh@25 -- # nvmftestinit
00:16:39.280 06:41:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:39.280 06:41:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:39.280 06:41:19 -- nvmf/common.sh@436 -- # prepare_net_devs
00:16:39.280 06:41:19 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:39.280 06:41:19 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:39.280 06:41:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:39.280 06:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:39.280 06:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:39.280 06:41:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:16:39.280 06:41:19 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:16:39.280 06:41:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:16:39.280 06:41:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:16:39.280 06:41:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:16:39.280 06:41:19 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:16:39.280 06:41:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:39.280 06:41:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:39.280 06:41:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:16:39.280 06:41:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:16:39.280 06:41:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:39.280 06:41:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:39.280 06:41:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:39.280 06:41:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:39.280 06:41:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
06:41:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.280 06:41:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.280 06:41:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.280 06:41:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:39.280 06:41:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:39.280 Cannot find device "nvmf_tgt_br" 00:16:39.280 06:41:19 -- nvmf/common.sh@154 -- # true 00:16:39.280 06:41:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.539 Cannot find device "nvmf_tgt_br2" 00:16:39.539 06:41:19 -- nvmf/common.sh@155 -- # true 00:16:39.539 06:41:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:39.539 06:41:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:39.539 Cannot find device "nvmf_tgt_br" 00:16:39.539 06:41:19 -- nvmf/common.sh@157 -- # true 00:16:39.539 06:41:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:39.539 Cannot find device "nvmf_tgt_br2" 00:16:39.539 06:41:19 -- nvmf/common.sh@158 -- # true 00:16:39.539 06:41:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:39.539 06:41:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:39.539 06:41:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.539 06:41:19 -- nvmf/common.sh@161 -- # true 00:16:39.539 06:41:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.539 06:41:19 -- nvmf/common.sh@162 -- # true 00:16:39.539 06:41:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.539 06:41:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.539 06:41:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.539 06:41:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.539 06:41:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.539 06:41:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.539 06:41:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.539 06:41:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:39.539 06:41:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:39.539 06:41:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:39.539 06:41:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:39.539 06:41:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:39.539 06:41:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:39.539 06:41:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.539 06:41:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.539 06:41:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.539 06:41:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:39.539 06:41:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:39.539 06:41:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:16:39.539 06:41:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.539 06:41:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.539 06:41:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.539 06:41:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.539 06:41:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:39.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:39.798 00:16:39.798 --- 10.0.0.2 ping statistics --- 00:16:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.798 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:39.798 06:41:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:39.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:39.798 00:16:39.798 --- 10.0.0.3 ping statistics --- 00:16:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.798 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:39.798 06:41:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:39.798 00:16:39.798 --- 10.0.0.1 ping statistics --- 00:16:39.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.798 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:39.798 06:41:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.798 06:41:19 -- nvmf/common.sh@421 -- # return 0 00:16:39.798 06:41:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.798 06:41:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.798 06:41:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.798 06:41:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.798 06:41:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.798 06:41:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.798 06:41:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.798 06:41:19 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:39.798 06:41:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.798 06:41:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:39.798 06:41:19 -- common/autotest_common.sh@10 -- # set +x 00:16:39.798 06:41:19 -- nvmf/common.sh@469 -- # nvmfpid=82214 00:16:39.798 06:41:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:39.798 06:41:19 -- nvmf/common.sh@470 -- # waitforlisten 82214 00:16:39.798 06:41:19 -- common/autotest_common.sh@819 -- # '[' -z 82214 ']' 00:16:39.798 06:41:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.798 06:41:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.798 06:41:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
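Condensed, the nvmf_veth_init sequence traced above builds a three-legged virtual topology and then proves it with the three pings: the initiator keeps 10.0.0.1 in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and the bridge-side veth peers all join nvmf_br. A minimal standalone sketch of the same layout, with one target leg shown and the initial cleanup pass omitted:

  # sketch of the fixture built above (single target leg)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target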
00:16:39.798 06:41:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.798 06:41:19 -- common/autotest_common.sh@10 -- # set +x 00:16:39.798 [2024-07-12 06:41:19.551743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:39.798 [2024-07-12 06:41:19.551883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.798 [2024-07-12 06:41:19.694751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.057 [2024-07-12 06:41:19.740110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.057 [2024-07-12 06:41:19.740272] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.057 [2024-07-12 06:41:19.740289] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.057 [2024-07-12 06:41:19.740300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.057 [2024-07-12 06:41:19.740344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.994 06:41:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:40.994 06:41:20 -- common/autotest_common.sh@852 -- # return 0 00:16:40.994 06:41:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.994 06:41:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:40.994 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 06:41:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.994 06:41:20 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.994 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.994 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 [2024-07-12 06:41:20.645584] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.994 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.994 06:41:20 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:40.994 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.994 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 [2024-07-12 06:41:20.653675] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:40.994 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.994 06:41:20 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:40.994 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.994 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 null0 00:16:40.994 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.994 06:41:20 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:40.994 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.994 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 null1 00:16:40.994 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.994 06:41:20 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:40.994 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.994 06:41:20 -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.994 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.994 06:41:20 -- host/discovery.sh@45 -- # hostpid=82246 00:16:40.995 06:41:20 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:40.995 06:41:20 -- host/discovery.sh@46 -- # waitforlisten 82246 /tmp/host.sock 00:16:40.995 06:41:20 -- common/autotest_common.sh@819 -- # '[' -z 82246 ']' 00:16:40.995 06:41:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:16:40.995 06:41:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:40.995 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:40.995 06:41:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:40.995 06:41:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:40.995 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.995 [2024-07-12 06:41:20.737050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:40.995 [2024-07-12 06:41:20.737173] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82246 ] 00:16:40.995 [2024-07-12 06:41:20.879160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.254 [2024-07-12 06:41:20.920339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.254 [2024-07-12 06:41:20.920519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.191 06:41:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.191 06:41:21 -- common/autotest_common.sh@852 -- # return 0 00:16:42.191 06:41:21 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.191 06:41:21 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:21 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:21 -- host/discovery.sh@72 -- # notify_id=0 00:16:42.191 06:41:21 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # sort 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # xargs 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:21 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:42.191 06:41:21 -- host/discovery.sh@79 -- # get_bdev_list 00:16:42.191 
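The rpc_cmd | jq -r '.[].name' | sort | xargs pipelines that recur from here on are the test's polling helpers: get_subsystem_names lists the controllers the host-side target has attached, and get_bdev_list lists the namespaces it has surfaced as bdevs, each flattened to a single sorted line for plain string comparison. Reconstructed from the trace (the canonical definitions live in host/discovery.sh, and rpc_cmd wraps scripts/rpc.py):

  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

so an assertion later in the run reads simply [[ $(get_bdev_list) == "nvme0n1 nvme0n2" ]].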
06:41:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # sort 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # xargs 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:21 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:42.191 06:41:21 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:21 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # sort 00:16:42.191 06:41:21 -- host/discovery.sh@59 -- # xargs 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:21 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:42.191 06:41:21 -- host/discovery.sh@83 -- # get_bdev_list 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # sort 00:16:42.191 06:41:21 -- host/discovery.sh@55 -- # xargs 00:16:42.191 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:22 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:42.191 06:41:22 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:42.191 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:22 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:42.191 06:41:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.191 06:41:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.191 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:22 -- host/discovery.sh@59 -- # sort 00:16:42.191 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:22 -- host/discovery.sh@59 -- # xargs 00:16:42.191 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.191 06:41:22 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:42.191 06:41:22 -- host/discovery.sh@87 -- # get_bdev_list 00:16:42.191 06:41:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.191 06:41:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.191 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.191 06:41:22 -- host/discovery.sh@55 -- # sort 00:16:42.191 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.191 06:41:22 -- host/discovery.sh@55 -- # 
xargs 00:16:42.191 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:42.451 06:41:22 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:42.451 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.451 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.451 [2024-07-12 06:41:22.154461] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.451 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:42.451 06:41:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.451 06:41:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.451 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.451 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.451 06:41:22 -- host/discovery.sh@59 -- # sort 00:16:42.451 06:41:22 -- host/discovery.sh@59 -- # xargs 00:16:42.451 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:42.451 06:41:22 -- host/discovery.sh@93 -- # get_bdev_list 00:16:42.451 06:41:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.451 06:41:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.451 06:41:22 -- host/discovery.sh@55 -- # sort 00:16:42.451 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.451 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.451 06:41:22 -- host/discovery.sh@55 -- # xargs 00:16:42.451 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:42.451 06:41:22 -- host/discovery.sh@94 -- # get_notification_count 00:16:42.451 06:41:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:42.451 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.451 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.451 06:41:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:42.451 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@74 -- # notification_count=0 00:16:42.451 06:41:22 -- host/discovery.sh@75 -- # notify_id=0 00:16:42.451 06:41:22 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:42.451 06:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.451 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:42.451 06:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.451 06:41:22 -- host/discovery.sh@100 -- # sleep 1 00:16:43.019 [2024-07-12 06:41:22.779196] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:43.019 [2024-07-12 06:41:22.779253] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:43.019 [2024-07-12 06:41:22.779273] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:43.019 [2024-07-12 06:41:22.785219] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:43.019 [2024-07-12 06:41:22.841563] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:43.019 [2024-07-12 06:41:22.841610] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:43.590 06:41:23 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:43.590 06:41:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:43.590 06:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.590 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.590 06:41:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:43.590 06:41:23 -- host/discovery.sh@59 -- # sort 00:16:43.590 06:41:23 -- host/discovery.sh@59 -- # xargs 00:16:43.590 06:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.590 06:41:23 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.590 06:41:23 -- host/discovery.sh@102 -- # get_bdev_list 00:16:43.590 06:41:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.590 06:41:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.590 06:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.590 06:41:23 -- host/discovery.sh@55 -- # xargs 00:16:43.590 06:41:23 -- host/discovery.sh@55 -- # sort 00:16:43.590 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.590 06:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.590 06:41:23 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:43.590 06:41:23 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:43.590 06:41:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:43.590 06:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.590 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.590 06:41:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:43.590 06:41:23 -- host/discovery.sh@63 -- # sort -n 00:16:43.590 06:41:23 -- host/discovery.sh@63 -- # xargs 00:16:43.590 06:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.848 06:41:23 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:16:43.848 06:41:23 -- host/discovery.sh@104 -- # get_notification_count 00:16:43.848 06:41:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:43.848 06:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.848 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.848 06:41:23 -- host/discovery.sh@74 -- # jq '. | length' 00:16:43.848 06:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.848 06:41:23 -- host/discovery.sh@74 -- # notification_count=1 00:16:43.848 06:41:23 -- host/discovery.sh@75 -- # notify_id=1 00:16:43.848 06:41:23 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:43.848 06:41:23 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:43.848 06:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.848 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.848 06:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.848 06:41:23 -- host/discovery.sh@109 -- # sleep 1 00:16:44.785 06:41:24 -- host/discovery.sh@110 -- # get_bdev_list 00:16:44.785 06:41:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.785 06:41:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:44.785 06:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.785 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:16:44.785 06:41:24 -- host/discovery.sh@55 -- # sort 00:16:44.785 06:41:24 -- host/discovery.sh@55 -- # xargs 00:16:44.785 06:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.785 06:41:24 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:44.785 06:41:24 -- host/discovery.sh@111 -- # get_notification_count 00:16:44.785 06:41:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:44.785 06:41:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:44.785 06:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.785 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:16:44.785 06:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.785 06:41:24 -- host/discovery.sh@74 -- # notification_count=1 00:16:44.785 06:41:24 -- host/discovery.sh@75 -- # notify_id=2 00:16:44.785 06:41:24 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:44.785 06:41:24 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:44.785 06:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.785 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:16:44.785 [2024-07-12 06:41:24.702590] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:44.785 [2024-07-12 06:41:24.703715] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:44.785 [2024-07-12 06:41:24.703757] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:44.785 06:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.785 06:41:24 -- host/discovery.sh@117 -- # sleep 1 00:16:45.043 [2024-07-12 06:41:24.709708] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:45.044 [2024-07-12 06:41:24.772133] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:45.044 [2024-07-12 06:41:24.772171] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:45.044 [2024-07-12 06:41:24.772180] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:45.981 06:41:25 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:45.981 06:41:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:45.981 06:41:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:45.981 06:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.981 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 06:41:25 -- host/discovery.sh@59 -- # sort 00:16:45.981 06:41:25 -- host/discovery.sh@59 -- # xargs 00:16:45.981 06:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.981 06:41:25 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.981 06:41:25 -- host/discovery.sh@119 -- # get_bdev_list 00:16:45.981 06:41:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.981 06:41:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:45.981 06:41:25 -- host/discovery.sh@55 -- # sort 00:16:45.981 06:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.981 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 06:41:25 -- host/discovery.sh@55 -- # xargs 00:16:45.981 06:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.981 06:41:25 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:45.981 06:41:25 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:45.981 06:41:25 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:45.981 06:41:25 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:45.981 06:41:25 -- host/discovery.sh@63 
-- # xargs 00:16:45.981 06:41:25 -- host/discovery.sh@63 -- # sort -n 00:16:45.981 06:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.981 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 06:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.981 06:41:25 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:45.981 06:41:25 -- host/discovery.sh@121 -- # get_notification_count 00:16:45.981 06:41:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:45.981 06:41:25 -- host/discovery.sh@74 -- # jq '. | length' 00:16:45.981 06:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.981 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.981 06:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:46.241 06:41:25 -- host/discovery.sh@74 -- # notification_count=0 00:16:46.241 06:41:25 -- host/discovery.sh@75 -- # notify_id=2 00:16:46.241 06:41:25 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:46.241 06:41:25 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:46.241 06:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:46.241 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 [2024-07-12 06:41:25.941750] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:46.241 [2024-07-12 06:41:25.941805] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:46.241 06:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:46.241 06:41:25 -- host/discovery.sh@127 -- # sleep 1 00:16:46.241 [2024-07-12 06:41:25.947743] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:46.241 [2024-07-12 06:41:25.947795] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:46.241 [2024-07-12 06:41:25.947941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.241 [2024-07-12 06:41:25.947988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.241 [2024-07-12 06:41:25.948004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.241 [2024-07-12 06:41:25.948013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.241 [2024-07-12 06:41:25.948023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.241 [2024-07-12 06:41:25.948032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.241 [2024-07-12 06:41:25.948044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.241 [2024-07-12 06:41:25.948053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.241 [2024-07-12 06:41:25.948062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14dd080 is same with the state(5) to be set 00:16:47.178 06:41:26 -- host/discovery.sh@128 -- # get_subsystem_names 00:16:47.178 06:41:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:47.178 06:41:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:47.178 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.178 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:47.178 06:41:26 -- host/discovery.sh@59 -- # sort 00:16:47.178 06:41:26 -- host/discovery.sh@59 -- # xargs 00:16:47.178 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.178 06:41:27 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.178 06:41:27 -- host/discovery.sh@129 -- # get_bdev_list 00:16:47.178 06:41:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.178 06:41:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:47.178 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.178 06:41:27 -- host/discovery.sh@55 -- # sort 00:16:47.178 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:16:47.178 06:41:27 -- host/discovery.sh@55 -- # xargs 00:16:47.178 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.178 06:41:27 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:47.178 06:41:27 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:47.178 06:41:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:47.178 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.178 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:16:47.178 06:41:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:47.178 06:41:27 -- host/discovery.sh@63 -- # sort -n 00:16:47.178 06:41:27 -- host/discovery.sh@63 -- # xargs 00:16:47.178 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.436 06:41:27 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:47.436 06:41:27 -- host/discovery.sh@131 -- # get_notification_count 00:16:47.436 06:41:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:47.436 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.436 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:16:47.436 06:41:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:47.436 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.436 06:41:27 -- host/discovery.sh@74 -- # notification_count=0 00:16:47.436 06:41:27 -- host/discovery.sh@75 -- # notify_id=2 00:16:47.436 06:41:27 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:47.436 06:41:27 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:47.436 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.436 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:16:47.436 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.436 06:41:27 -- host/discovery.sh@135 -- # sleep 1 00:16:48.372 06:41:28 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:48.372 06:41:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:48.372 06:41:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:48.372 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.372 06:41:28 -- host/discovery.sh@59 -- # sort 00:16:48.372 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:16:48.372 06:41:28 -- host/discovery.sh@59 -- # xargs 00:16:48.372 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.372 06:41:28 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:48.372 06:41:28 -- host/discovery.sh@137 -- # get_bdev_list 00:16:48.372 06:41:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.372 06:41:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:48.372 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.372 06:41:28 -- host/discovery.sh@55 -- # xargs 00:16:48.372 06:41:28 -- host/discovery.sh@55 -- # sort 00:16:48.372 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:16:48.372 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.631 06:41:28 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:48.631 06:41:28 -- host/discovery.sh@138 -- # get_notification_count 00:16:48.631 06:41:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:48.631 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.631 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:16:48.631 06:41:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:48.631 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.631 06:41:28 -- host/discovery.sh@74 -- # notification_count=2 00:16:48.631 06:41:28 -- host/discovery.sh@75 -- # notify_id=4 00:16:48.631 06:41:28 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:48.631 06:41:28 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:48.631 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.631 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:16:49.568 [2024-07-12 06:41:29.392653] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:49.568 [2024-07-12 06:41:29.392687] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:49.568 [2024-07-12 06:41:29.392708] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:49.568 [2024-07-12 06:41:29.398692] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:49.568 [2024-07-12 06:41:29.458168] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:49.568 [2024-07-12 06:41:29.458227] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:49.568 06:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.568 06:41:29 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:49.568 06:41:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:49.568 06:41:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:49.568 06:41:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:49.568 06:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.568 06:41:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:49.568 06:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.568 06:41:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:49.568 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.568 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:49.568 request: 00:16:49.568 { 00:16:49.568 "name": "nvme", 00:16:49.568 "trtype": "tcp", 00:16:49.568 "traddr": "10.0.0.2", 00:16:49.568 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:49.568 "adrfam": "ipv4", 00:16:49.568 "trsvcid": "8009", 00:16:49.568 "wait_for_attach": true, 00:16:49.568 "method": "bdev_nvme_start_discovery", 00:16:49.568 "req_id": 1 00:16:49.568 } 00:16:49.568 Got JSON-RPC error response 00:16:49.568 response: 00:16:49.568 { 00:16:49.568 "code": -17, 00:16:49.568 "message": "File exists" 00:16:49.568 } 00:16:49.568 06:41:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:49.568 06:41:29 -- common/autotest_common.sh@643 -- # es=1 00:16:49.568 06:41:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:49.568 06:41:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:49.568 06:41:29 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:49.568 06:41:29 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:49.568 06:41:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:49.568 06:41:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:49.568 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.568 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:49.568 06:41:29 -- host/discovery.sh@67 -- # sort 00:16:49.568 06:41:29 -- host/discovery.sh@67 -- # xargs 00:16:49.826 06:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.826 06:41:29 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:49.826 06:41:29 -- host/discovery.sh@147 -- # get_bdev_list 00:16:49.826 06:41:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:49.826 06:41:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:49.826 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.826 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:49.826 06:41:29 -- host/discovery.sh@55 -- # xargs 00:16:49.826 06:41:29 -- host/discovery.sh@55 -- # sort 00:16:49.826 06:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.826 06:41:29 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:49.826 06:41:29 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:49.826 06:41:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:49.826 06:41:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:49.826 06:41:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:49.826 06:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.826 06:41:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:49.826 06:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.826 06:41:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:49.826 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.826 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:49.826 request: 00:16:49.826 { 00:16:49.826 "name": "nvme_second", 00:16:49.826 "trtype": "tcp", 00:16:49.826 "traddr": "10.0.0.2", 00:16:49.826 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:49.826 "adrfam": "ipv4", 00:16:49.826 "trsvcid": "8009", 00:16:49.826 "wait_for_attach": true, 00:16:49.826 "method": "bdev_nvme_start_discovery", 00:16:49.826 "req_id": 1 00:16:49.826 } 00:16:49.826 Got JSON-RPC error response 00:16:49.826 response: 00:16:49.826 { 00:16:49.826 "code": -17, 00:16:49.826 "message": "File exists" 00:16:49.826 } 00:16:49.826 06:41:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:49.826 06:41:29 -- common/autotest_common.sh@643 -- # es=1 00:16:49.826 06:41:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:49.826 06:41:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:49.826 06:41:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:49.826 06:41:29 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:49.826 06:41:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:16:49.826 06:41:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:49.827 06:41:29 -- host/discovery.sh@67 -- # sort 00:16:49.827 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.827 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:49.827 06:41:29 -- host/discovery.sh@67 -- # xargs 00:16:49.827 06:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.827 06:41:29 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:49.827 06:41:29 -- host/discovery.sh@153 -- # get_bdev_list 00:16:49.827 06:41:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:49.827 06:41:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:49.827 06:41:29 -- host/discovery.sh@55 -- # sort 00:16:49.827 06:41:29 -- host/discovery.sh@55 -- # xargs 00:16:49.827 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.827 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:49.827 06:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.827 06:41:29 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:49.827 06:41:29 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:49.827 06:41:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:49.827 06:41:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:49.827 06:41:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:49.827 06:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.827 06:41:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:49.827 06:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.827 06:41:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:49.827 06:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.827 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:51.202 [2024-07-12 06:41:30.748172] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:51.202 [2024-07-12 06:41:30.748367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:51.202 [2024-07-12 06:41:30.748417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:51.202 [2024-07-12 06:41:30.748436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d9ea0 with addr=10.0.0.2, port=8010 00:16:51.202 [2024-07-12 06:41:30.748454] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:51.202 [2024-07-12 06:41:30.748464] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:51.202 [2024-07-12 06:41:30.748473] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:52.138 [2024-07-12 06:41:31.748181] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:52.138 [2024-07-12 06:41:31.748326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:52.138 [2024-07-12 06:41:31.748387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:52.138 [2024-07-12 
06:41:31.748421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b51f0 with addr=10.0.0.2, port=8010 00:16:52.138 [2024-07-12 06:41:31.748439] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:52.138 [2024-07-12 06:41:31.748449] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:52.138 [2024-07-12 06:41:31.748458] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:53.081 [2024-07-12 06:41:32.748017] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:53.081 request: 00:16:53.081 { 00:16:53.081 "name": "nvme_second", 00:16:53.081 "trtype": "tcp", 00:16:53.081 "traddr": "10.0.0.2", 00:16:53.081 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:53.081 "adrfam": "ipv4", 00:16:53.081 "trsvcid": "8010", 00:16:53.081 "attach_timeout_ms": 3000, 00:16:53.081 "method": "bdev_nvme_start_discovery", 00:16:53.081 "req_id": 1 00:16:53.081 } 00:16:53.081 Got JSON-RPC error response 00:16:53.081 response: 00:16:53.081 { 00:16:53.081 "code": -110, 00:16:53.081 "message": "Connection timed out" 00:16:53.081 } 00:16:53.081 06:41:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:53.081 06:41:32 -- common/autotest_common.sh@643 -- # es=1 00:16:53.081 06:41:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.081 06:41:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.081 06:41:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.081 06:41:32 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:53.081 06:41:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:53.081 06:41:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:53.081 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.081 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.081 06:41:32 -- host/discovery.sh@67 -- # sort 00:16:53.081 06:41:32 -- host/discovery.sh@67 -- # xargs 00:16:53.081 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.081 06:41:32 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:53.081 06:41:32 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:53.081 06:41:32 -- host/discovery.sh@162 -- # kill 82246 00:16:53.081 06:41:32 -- host/discovery.sh@163 -- # nvmftestfini 00:16:53.081 06:41:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:53.081 06:41:32 -- nvmf/common.sh@116 -- # sync 00:16:53.081 06:41:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:53.081 06:41:32 -- nvmf/common.sh@119 -- # set +e 00:16:53.081 06:41:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:53.081 06:41:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:53.081 rmmod nvme_tcp 00:16:53.081 rmmod nvme_fabrics 00:16:53.081 rmmod nvme_keyring 00:16:53.081 06:41:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:53.081 06:41:32 -- nvmf/common.sh@123 -- # set -e 00:16:53.081 06:41:32 -- nvmf/common.sh@124 -- # return 0 00:16:53.081 06:41:32 -- nvmf/common.sh@477 -- # '[' -n 82214 ']' 00:16:53.081 06:41:32 -- nvmf/common.sh@478 -- # killprocess 82214 00:16:53.081 06:41:32 -- common/autotest_common.sh@926 -- # '[' -z 82214 ']' 00:16:53.081 06:41:32 -- common/autotest_common.sh@930 -- # kill -0 82214 00:16:53.081 06:41:32 -- common/autotest_common.sh@931 -- # uname 00:16:53.081 06:41:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.081 06:41:32 
-- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82214 00:16:53.081 06:41:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:53.081 06:41:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:53.081 killing process with pid 82214 00:16:53.081 06:41:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82214' 00:16:53.081 06:41:32 -- common/autotest_common.sh@945 -- # kill 82214 00:16:53.081 06:41:32 -- common/autotest_common.sh@950 -- # wait 82214 00:16:53.339 06:41:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:53.339 06:41:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:53.339 06:41:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:53.339 06:41:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.339 06:41:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:53.339 06:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.339 06:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.339 06:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.339 06:41:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:53.339 00:16:53.339 real 0m14.122s 00:16:53.339 user 0m27.111s 00:16:53.339 sys 0m2.333s 00:16:53.339 06:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.339 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:16:53.339 ************************************ 00:16:53.339 END TEST nvmf_discovery 00:16:53.339 ************************************ 00:16:53.339 06:41:33 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:53.339 06:41:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:53.339 06:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:53.339 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:16:53.339 ************************************ 00:16:53.339 START TEST nvmf_discovery_remove_ifc 00:16:53.339 ************************************ 00:16:53.339 06:41:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:53.597 * Looking for test storage... 
00:16:53.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.598 06:41:33 -- nvmf/common.sh@7 -- # uname -s 00:16:53.598 06:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.598 06:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.598 06:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.598 06:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.598 06:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.598 06:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.598 06:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.598 06:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.598 06:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.598 06:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.598 06:41:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:16:53.598 06:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:16:53.598 06:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.598 06:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.598 06:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.598 06:41:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.598 06:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.598 06:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.598 06:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.598 06:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.598 06:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.598 06:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.598 06:41:33 -- 
paths/export.sh@5 -- # export PATH 00:16:53.598 06:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.598 06:41:33 -- nvmf/common.sh@46 -- # : 0 00:16:53.598 06:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:53.598 06:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:53.598 06:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:53.598 06:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.598 06:41:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.598 06:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:53.598 06:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:53.598 06:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:53.598 06:41:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:53.598 06:41:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:53.598 06:41:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.598 06:41:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:53.598 06:41:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:53.598 06:41:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:53.598 06:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.598 06:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.598 06:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.598 06:41:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:53.598 06:41:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:53.598 06:41:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:53.598 06:41:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:53.598 06:41:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:53.598 06:41:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:53.598 06:41:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.598 06:41:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.598 06:41:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:53.598 06:41:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:53.598 06:41:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:53.598 06:41:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:53.598 06:41:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:53.598 06:41:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
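The common.sh comparisons traced just above ([[ virt != virt ]], [[ no == yes ]], [[ virt == phy ]], [[ virt == phy-fallback ]], [[ tcp == tcp ]]) are prepare_net_devs choosing between physical NICs and the virtual fixture; with NET_TYPE=virt every hardware branch falls through to nvmf_veth_init, which is why the interface setup below repeats for this second test. A loose reconstruction of that decision, with the physical-NIC path elided (the real function lives in test/nvmf/common.sh):

  prepare_net_devs() {
      local -g is_hw=no
      remove_spdk_ns                                   # drop any stale nvmf_tgt_ns_spdk first
      [[ $NET_TYPE != virt ]] && is_hw=yes             # hw probing elided; this run traces virt
      [[ $is_hw == yes ]] && return 0                  # phy/phy-fallback paths would stop here
      [[ $TEST_TRANSPORT == tcp ]] && nvmf_veth_init   # the branch taken in this run
  }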
00:16:53.598 06:41:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:53.598 06:41:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:53.598 06:41:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:53.598 06:41:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:53.598 06:41:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:53.598 06:41:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:53.598 Cannot find device "nvmf_tgt_br" 00:16:53.598 06:41:33 -- nvmf/common.sh@154 -- # true 00:16:53.598 06:41:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.598 Cannot find device "nvmf_tgt_br2" 00:16:53.598 06:41:33 -- nvmf/common.sh@155 -- # true 00:16:53.598 06:41:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:53.598 06:41:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:53.598 Cannot find device "nvmf_tgt_br" 00:16:53.598 06:41:33 -- nvmf/common.sh@157 -- # true 00:16:53.598 06:41:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:53.598 Cannot find device "nvmf_tgt_br2" 00:16:53.598 06:41:33 -- nvmf/common.sh@158 -- # true 00:16:53.598 06:41:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:53.598 06:41:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:53.598 06:41:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.598 06:41:33 -- nvmf/common.sh@161 -- # true 00:16:53.598 06:41:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.598 06:41:33 -- nvmf/common.sh@162 -- # true 00:16:53.598 06:41:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.598 06:41:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.598 06:41:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.598 06:41:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.598 06:41:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.598 06:41:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.598 06:41:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.598 06:41:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.857 06:41:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:53.857 06:41:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:53.857 06:41:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:53.857 06:41:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:53.857 06:41:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:53.857 06:41:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.857 06:41:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.857 06:41:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.857 06:41:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:53.857 06:41:33 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:16:53.857 06:41:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:53.857 06:41:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.857 06:41:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:53.857 06:41:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:53.857 06:41:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:53.857 06:41:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:53.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:53.857 00:16:53.857 --- 10.0.0.2 ping statistics --- 00:16:53.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.857 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:53.857 06:41:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:53.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:53.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:53.857 00:16:53.857 --- 10.0.0.3 ping statistics --- 00:16:53.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.857 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:53.857 06:41:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:53.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:53.857 00:16:53.857 --- 10.0.0.1 ping statistics --- 00:16:53.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.857 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:53.857 06:41:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.857 06:41:33 -- nvmf/common.sh@421 -- # return 0 00:16:53.857 06:41:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:53.857 06:41:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.857 06:41:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:53.857 06:41:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:53.857 06:41:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.857 06:41:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:53.857 06:41:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:53.857 06:41:33 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:53.857 06:41:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:53.857 06:41:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:53.857 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:16:53.857 06:41:33 -- nvmf/common.sh@469 -- # nvmfpid=82745 00:16:53.857 06:41:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.857 06:41:33 -- nvmf/common.sh@470 -- # waitforlisten 82745 00:16:53.857 06:41:33 -- common/autotest_common.sh@819 -- # '[' -z 82745 ']' 00:16:53.857 06:41:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.857 06:41:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.857 06:41:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
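With connectivity verified by the three pings, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... -i 0 -e 0xFFFF -m 0x2) and blocks in waitforlisten until the app answers on its RPC socket. A hedged sketch of that helper — the real one lives in autotest_common.sh; the probe RPC and retry cadence here are assumptions:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
      # give up immediately if the app died during startup
      kill -0 "$pid" 2> /dev/null || return 1
      # rpc.py succeeds only once the app is initialized and serving RPCs
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        return 0
      fi
      sleep 0.5
    done
    return 1
  }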
00:16:53.857 06:41:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.857 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:16:53.857 [2024-07-12 06:41:33.737060] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:53.857 [2024-07-12 06:41:33.737190] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.116 [2024-07-12 06:41:33.879771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.116 [2024-07-12 06:41:33.915544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.116 [2024-07-12 06:41:33.915720] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.116 [2024-07-12 06:41:33.915733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.116 [2024-07-12 06:41:33.915741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.116 [2024-07-12 06:41:33.915764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.052 06:41:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.052 06:41:34 -- common/autotest_common.sh@852 -- # return 0 00:16:55.052 06:41:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:55.052 06:41:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:55.052 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:55.052 06:41:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.052 06:41:34 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:55.052 06:41:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.052 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:55.052 [2024-07-12 06:41:34.789635] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.052 [2024-07-12 06:41:34.797862] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:55.052 null0 00:16:55.052 [2024-07-12 06:41:34.829804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.052 06:41:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.052 06:41:34 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82777 00:16:55.052 06:41:34 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:55.052 06:41:34 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82777 /tmp/host.sock 00:16:55.052 06:41:34 -- common/autotest_common.sh@819 -- # '[' -z 82777 ']' 00:16:55.052 06:41:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:16:55.053 06:41:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.053 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:55.053 06:41:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:55.053 06:41:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.053 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:55.053 [2024-07-12 06:41:34.902598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:55.053 [2024-07-12 06:41:34.902677] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82777 ] 00:16:55.312 [2024-07-12 06:41:35.043234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.312 [2024-07-12 06:41:35.085436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.312 [2024-07-12 06:41:35.085612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.312 06:41:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.312 06:41:35 -- common/autotest_common.sh@852 -- # return 0 00:16:55.312 06:41:35 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.312 06:41:35 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:55.312 06:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.312 06:41:35 -- common/autotest_common.sh@10 -- # set +x 00:16:55.312 06:41:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.312 06:41:35 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:55.312 06:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.312 06:41:35 -- common/autotest_common.sh@10 -- # set +x 00:16:55.312 06:41:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.312 06:41:35 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:55.312 06:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.312 06:41:35 -- common/autotest_common.sh@10 -- # set +x 00:16:56.689 [2024-07-12 06:41:36.233319] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:56.689 [2024-07-12 06:41:36.233354] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:56.689 [2024-07-12 06:41:36.233374] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:56.689 [2024-07-12 06:41:36.239417] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:56.689 [2024-07-12 06:41:36.295775] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:56.689 [2024-07-12 06:41:36.295882] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:56.689 [2024-07-12 06:41:36.295936] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:56.689 [2024-07-12 06:41:36.295984] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:56.689 [2024-07-12 06:41:36.296008] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:56.689 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.689 06:41:36 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.689 [2024-07-12 06:41:36.301589] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18e2660 was disconnected and freed. delete nvme_qpair. 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.689 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.689 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.689 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.689 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.689 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:56.689 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.689 06:41:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.626 06:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.626 06:41:37 -- common/autotest_common.sh@10 -- # set +x 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.626 06:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.626 06:41:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.003 06:41:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.003 06:41:38 -- common/autotest_common.sh@10 -- # set +x 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.003 06:41:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:59.003 06:41:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.939 06:41:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.939 06:41:39 -- common/autotest_common.sh@10 -- # set +x 00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.939 06:41:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:59.939 06:41:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.873 06:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.873 06:41:40 -- common/autotest_common.sh@10 -- # set +x 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.873 06:41:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:00.873 06:41:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.808 06:41:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.808 06:41:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.808 06:41:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.808 06:41:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.808 06:41:41 -- common/autotest_common.sh@10 -- # set +x 00:17:01.808 06:41:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.808 06:41:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.808 06:41:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.808 [2024-07-12 06:41:41.724023] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:01.808 [2024-07-12 06:41:41.724101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.808 [2024-07-12 06:41:41.724116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.808 [2024-07-12 06:41:41.724129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.808 [2024-07-12 06:41:41.724138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.808 [2024-07-12 06:41:41.724148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.808 [2024-07-12 06:41:41.724157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.808 [2024-07-12 06:41:41.724166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.808 [2024-07-12 06:41:41.724175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.808 [2024-07-12 
06:41:41.724185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.808 [2024-07-12 06:41:41.724194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.808 [2024-07-12 06:41:41.724202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a6cf0 is same with the state(5) to be set 00:17:02.067 [2024-07-12 06:41:41.734031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a6cf0 (9): Bad file descriptor 00:17:02.067 06:41:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:02.067 06:41:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:02.067 [2024-07-12 06:41:41.744052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:03.002 06:41:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:03.002 06:41:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.002 06:41:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:03.002 06:41:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.002 06:41:42 -- common/autotest_common.sh@10 -- # set +x 00:17:03.002 06:41:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:03.002 06:41:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:03.002 [2024-07-12 06:41:42.757998] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:03.935 [2024-07-12 06:41:43.779060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:05.310 [2024-07-12 06:41:44.803058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:05.310 [2024-07-12 06:41:44.803168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a6cf0 with addr=10.0.0.2, port=4420 00:17:05.310 [2024-07-12 06:41:44.803193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a6cf0 is same with the state(5) to be set 00:17:05.310 [2024-07-12 06:41:44.803263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:05.310 [2024-07-12 06:41:44.803300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:05.310 [2024-07-12 06:41:44.803314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:05.310 [2024-07-12 06:41:44.803343] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:05.310 [2024-07-12 06:41:44.804066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a6cf0 (9): Bad file descriptor 00:17:05.310 [2024-07-12 06:41:44.804116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
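The failure cascade above is driven by the timeouts passed to bdev_nvme_start_discovery back at 06:41:35. With the target interface gone, reads hit errno 110 (Connection timed out), the qpair is torn down, and the host retries a reconnect once per second until the two-second controller-loss budget expires, at which point reinitialization is abandoned and the controller is failed for good. The attach call, reassembled from the xtrace with the flags annotated:

  #   --reconnect-delay-sec 1       wait 1 s between reconnect attempts
  #   --ctrlr-loss-timeout-sec 2    declare the controller lost after 2 s offline
  #   --fast-io-fail-timeout-sec 1  fail queued I/O after 1 s instead of holding it
  #   --wait-for-attach             block the RPC until the NVM controller attaches
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach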
00:17:05.310 [2024-07-12 06:41:44.804230] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:05.310 [2024-07-12 06:41:44.804340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.310 [2024-07-12 06:41:44.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.310 [2024-07-12 06:41:44.804384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.310 [2024-07-12 06:41:44.804397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.310 [2024-07-12 06:41:44.804411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.310 [2024-07-12 06:41:44.804424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.310 [2024-07-12 06:41:44.804437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.310 [2024-07-12 06:41:44.804450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.310 [2024-07-12 06:41:44.804464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.310 [2024-07-12 06:41:44.804477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.310 [2024-07-12 06:41:44.804490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
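The NOTICE block above is the host draining the dead admin queue: the outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands complete with ABORTED - SQ DELETION (status 00/08) as both the NVM controller and the discovery controller drop into failed state. Meanwhile the test itself is simply looping in wait_for_bdev '' until nvme0n1 vanishes from the host's bdev list; the get_bdev_list pipeline traced once per second above reduces to the sketch below (pipeline verbatim from the xtrace; the loop shape mirrors wait_for_bdev, though the real helper also enforces a timeout, elided here):

  get_bdev_list() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
      | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
    local expected=$1   # "" means: wait until the list is empty
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
      sleep 1
    done
  }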
00:17:05.310 [2024-07-12 06:41:44.804553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a7100 (9): Bad file descriptor 00:17:05.310 [2024-07-12 06:41:44.805540] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:05.310 [2024-07-12 06:41:44.805590] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:05.310 06:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.310 06:41:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:05.310 06:41:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.249 06:41:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.249 06:41:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.249 06:41:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.249 06:41:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.249 06:41:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.249 06:41:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:06.249 06:41:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:07.228 [2024-07-12 06:41:46.810761] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:07.228 [2024-07-12 06:41:46.810806] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:07.228 [2024-07-12 06:41:46.810826] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:07.228 [2024-07-12 06:41:46.816805] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:07.228 [2024-07-12 06:41:46.872264] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:07.228 [2024-07-12 06:41:46.872348] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:07.228 [2024-07-12 06:41:46.872370] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:07.228 [2024-07-12 06:41:46.872386] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:07.228 [2024-07-12 06:41:46.872395] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:07.228 [2024-07-12 06:41:46.879426] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18969a0 was disconnected and freed. delete nvme_qpair. 00:17:07.228 06:41:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:07.228 06:41:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.228 06:41:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:07.228 06:41:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:07.228 06:41:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:07.228 06:41:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.228 06:41:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:07.228 06:41:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:07.228 06:41:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:07.228 06:41:47 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:07.228 06:41:47 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82777 00:17:07.228 06:41:47 -- common/autotest_common.sh@926 -- # '[' -z 82777 ']' 00:17:07.228 06:41:47 -- common/autotest_common.sh@930 -- # kill -0 82777 00:17:07.228 06:41:47 -- common/autotest_common.sh@931 -- # uname 00:17:07.228 06:41:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:07.228 06:41:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82777 00:17:07.228 06:41:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:07.228 06:41:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:07.228 killing process with pid 82777 00:17:07.228 06:41:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82777' 00:17:07.228 06:41:47 -- common/autotest_common.sh@945 -- # kill 82777 00:17:07.228 06:41:47 -- common/autotest_common.sh@950 -- # wait 82777 00:17:07.488 06:41:47 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:07.488 06:41:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:07.488 06:41:47 -- nvmf/common.sh@116 -- # sync 00:17:07.488 06:41:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:07.488 06:41:47 -- nvmf/common.sh@119 -- # set +e 00:17:07.488 06:41:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:07.488 06:41:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:07.488 rmmod nvme_tcp 00:17:07.488 rmmod nvme_fabrics 00:17:07.488 rmmod nvme_keyring 00:17:07.488 06:41:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:07.488 06:41:47 -- nvmf/common.sh@123 -- # set -e 00:17:07.488 06:41:47 -- nvmf/common.sh@124 -- # return 0 00:17:07.488 06:41:47 -- nvmf/common.sh@477 -- # '[' -n 82745 ']' 00:17:07.488 06:41:47 -- nvmf/common.sh@478 -- # killprocess 82745 00:17:07.488 06:41:47 -- common/autotest_common.sh@926 -- # '[' -z 82745 ']' 00:17:07.488 06:41:47 -- common/autotest_common.sh@930 -- # kill -0 82745 00:17:07.488 06:41:47 -- common/autotest_common.sh@931 -- # uname 00:17:07.488 06:41:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:07.488 06:41:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82745 00:17:07.488 06:41:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:07.488 06:41:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:07.488 killing process with pid 82745 
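With nvme1n1 back in the bdev list the test clears its traps and tears down the host via killprocess. Its logic is visible in the xtrace above; a hedged reconstruction (the sudo handling is simplified here):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                  # must still be running
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1    # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
  }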
00:17:07.488 06:41:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82745' 00:17:07.488 06:41:47 -- common/autotest_common.sh@945 -- # kill 82745 00:17:07.488 06:41:47 -- common/autotest_common.sh@950 -- # wait 82745 00:17:07.747 06:41:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:07.747 06:41:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:07.747 06:41:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:07.747 06:41:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.747 06:41:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:07.747 06:41:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.747 06:41:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.747 06:41:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.747 06:41:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:07.747 00:17:07.747 real 0m14.359s 00:17:07.747 user 0m22.619s 00:17:07.747 sys 0m2.443s 00:17:07.747 06:41:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.747 06:41:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.747 ************************************ 00:17:07.747 END TEST nvmf_discovery_remove_ifc 00:17:07.747 ************************************ 00:17:07.747 06:41:47 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:07.747 06:41:47 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:07.747 06:41:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:07.747 06:41:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:07.747 06:41:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.747 ************************************ 00:17:07.747 START TEST nvmf_digest 00:17:07.747 ************************************ 00:17:07.747 06:41:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:08.005 * Looking for test storage... 
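The END TEST / START TEST banners and the real/user/sys block are produced by the run_test wrapper, which times each suite and is what chains nvmf_discovery_remove_ifc into nvmf_digest here. A rough sketch of its shape (banner widths and the argument-count guard seen in the xtrace are approximations):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"              # emits the real/user/sys block seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }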
00:17:08.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:08.005 06:41:47 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.005 06:41:47 -- nvmf/common.sh@7 -- # uname -s 00:17:08.005 06:41:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.005 06:41:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.005 06:41:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.005 06:41:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.005 06:41:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.005 06:41:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.005 06:41:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.005 06:41:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.006 06:41:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.006 06:41:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.006 06:41:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:17:08.006 06:41:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:17:08.006 06:41:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.006 06:41:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.006 06:41:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.006 06:41:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.006 06:41:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.006 06:41:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.006 06:41:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.006 06:41:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.006 06:41:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.006 06:41:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.006 06:41:47 -- paths/export.sh@5 
-- # export PATH 00:17:08.006 06:41:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.006 06:41:47 -- nvmf/common.sh@46 -- # : 0 00:17:08.006 06:41:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:08.006 06:41:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:08.006 06:41:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:08.006 06:41:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.006 06:41:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.006 06:41:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:08.006 06:41:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:08.006 06:41:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:08.006 06:41:47 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:08.006 06:41:47 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:08.006 06:41:47 -- host/digest.sh@16 -- # runtime=2 00:17:08.006 06:41:47 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:17:08.006 06:41:47 -- host/digest.sh@132 -- # nvmftestinit 00:17:08.006 06:41:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:08.006 06:41:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.006 06:41:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:08.006 06:41:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:08.006 06:41:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:08.006 06:41:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.006 06:41:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.006 06:41:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.006 06:41:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:08.006 06:41:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:08.006 06:41:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:08.006 06:41:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:08.006 06:41:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:08.006 06:41:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:08.006 06:41:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.006 06:41:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.006 06:41:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:08.006 06:41:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:08.006 06:41:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.006 06:41:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.006 06:41:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.006 06:41:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.006 06:41:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.006 06:41:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.006 06:41:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.006 06:41:47 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.006 06:41:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:08.006 06:41:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:08.006 Cannot find device "nvmf_tgt_br" 00:17:08.006 06:41:47 -- nvmf/common.sh@154 -- # true 00:17:08.006 06:41:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.006 Cannot find device "nvmf_tgt_br2" 00:17:08.006 06:41:47 -- nvmf/common.sh@155 -- # true 00:17:08.006 06:41:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:08.006 06:41:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:08.006 Cannot find device "nvmf_tgt_br" 00:17:08.006 06:41:47 -- nvmf/common.sh@157 -- # true 00:17:08.006 06:41:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:08.006 Cannot find device "nvmf_tgt_br2" 00:17:08.006 06:41:47 -- nvmf/common.sh@158 -- # true 00:17:08.006 06:41:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:08.006 06:41:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:08.006 06:41:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.006 06:41:47 -- nvmf/common.sh@161 -- # true 00:17:08.006 06:41:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.006 06:41:47 -- nvmf/common.sh@162 -- # true 00:17:08.006 06:41:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:08.006 06:41:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:08.006 06:41:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:08.006 06:41:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:08.006 06:41:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:08.006 06:41:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:08.006 06:41:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:08.006 06:41:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:08.006 06:41:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:08.006 06:41:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:08.006 06:41:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:08.006 06:41:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:08.265 06:41:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:08.265 06:41:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:08.265 06:41:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:08.265 06:41:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:08.265 06:41:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:08.265 06:41:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:08.265 06:41:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:08.265 06:41:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:08.265 06:41:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:08.265 
06:41:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:08.265 06:41:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:08.265 06:41:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:08.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:17:08.265 00:17:08.265 --- 10.0.0.2 ping statistics --- 00:17:08.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.265 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:08.265 06:41:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:08.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:08.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:17:08.265 00:17:08.265 --- 10.0.0.3 ping statistics --- 00:17:08.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.265 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:08.265 06:41:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:08.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:08.265 00:17:08.265 --- 10.0.0.1 ping statistics --- 00:17:08.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.265 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:08.265 06:41:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.265 06:41:48 -- nvmf/common.sh@421 -- # return 0 00:17:08.265 06:41:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.265 06:41:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.265 06:41:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.265 06:41:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.265 06:41:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.265 06:41:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.265 06:41:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.265 06:41:48 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:08.265 06:41:48 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:17:08.265 06:41:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:08.265 06:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.265 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 ************************************ 00:17:08.265 START TEST nvmf_digest_clean 00:17:08.265 ************************************ 00:17:08.265 06:41:48 -- common/autotest_common.sh@1104 -- # run_digest 00:17:08.265 06:41:48 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:17:08.265 06:41:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.265 06:41:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.265 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 06:41:48 -- nvmf/common.sh@469 -- # nvmfpid=83178 00:17:08.265 06:41:48 -- nvmf/common.sh@470 -- # waitforlisten 83178 00:17:08.265 06:41:48 -- common/autotest_common.sh@819 -- # '[' -z 83178 ']' 00:17:08.265 06:41:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:08.265 06:41:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.265 06:41:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.265 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.265 06:41:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.265 06:41:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.265 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 [2024-07-12 06:41:48.099239] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:08.265 [2024-07-12 06:41:48.099329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.526 [2024-07-12 06:41:48.242226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.526 [2024-07-12 06:41:48.280272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.526 [2024-07-12 06:41:48.280429] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.526 [2024-07-12 06:41:48.280445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.526 [2024-07-12 06:41:48.280456] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.526 [2024-07-12 06:41:48.280483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.463 06:41:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.463 06:41:49 -- common/autotest_common.sh@852 -- # return 0 00:17:09.463 06:41:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.463 06:41:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:09.463 06:41:49 -- common/autotest_common.sh@10 -- # set +x 00:17:09.463 06:41:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.463 06:41:49 -- host/digest.sh@120 -- # common_target_config 00:17:09.463 06:41:49 -- host/digest.sh@43 -- # rpc_cmd 00:17:09.463 06:41:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.463 06:41:49 -- common/autotest_common.sh@10 -- # set +x 00:17:09.463 null0 00:17:09.463 [2024-07-12 06:41:49.152013] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.463 [2024-07-12 06:41:49.176126] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.463 06:41:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.463 06:41:49 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:17:09.463 06:41:49 -- host/digest.sh@77 -- # local rw bs qd 00:17:09.463 06:41:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:09.463 06:41:49 -- host/digest.sh@80 -- # rw=randread 00:17:09.464 06:41:49 -- host/digest.sh@80 -- # bs=4096 00:17:09.464 06:41:49 -- host/digest.sh@80 -- # qd=128 00:17:09.464 06:41:49 -- host/digest.sh@82 -- # bperfpid=83210 00:17:09.464 06:41:49 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:09.464 06:41:49 -- host/digest.sh@83 -- # waitforlisten 83210 /var/tmp/bperf.sock 00:17:09.464 06:41:49 -- common/autotest_common.sh@819 -- # '[' -z 83210 ']' 00:17:09.464 06:41:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:09.464 06:41:49 -- common/autotest_common.sh@824 -- 
# local max_retries=100 00:17:09.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:09.464 06:41:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:09.464 06:41:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.464 06:41:49 -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 [2024-07-12 06:41:49.226270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:09.464 [2024-07-12 06:41:49.226385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83210 ] 00:17:09.464 [2024-07-12 06:41:49.357441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.723 [2024-07-12 06:41:49.392094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.290 06:41:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.290 06:41:50 -- common/autotest_common.sh@852 -- # return 0 00:17:10.291 06:41:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:10.291 06:41:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:10.291 06:41:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:10.550 06:41:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.550 06:41:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.808 nvme0n1 00:17:10.808 06:41:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:10.808 06:41:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:11.067 Running I/O for 2 seconds... 
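This is the core of the digest-clean case: bdevperf is started idle, the controller is attached with NVMe/TCP data digest enabled (--ddgst puts a CRC32C over the data portion of every data PDU), and only then is the queued randread workload kicked off. Condensed, with binaries and flags verbatim from this run:

  BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 1. start bdevperf idle (-z) with --wait-for-rpc so it can be configured first
  $BPERF -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # 2. once it listens on /var/tmp/bperf.sock, finish init and attach the target
  #    with the TCP data digest turned on
  $RPC -s /var/tmp/bperf.sock framework_start_init
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 3. run the preconfigured workload against the new nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests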
00:17:12.972 00:17:12.972 Latency(us) 00:17:12.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.972 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:12.972 nvme0n1 : 2.01 13071.24 51.06 0.00 0.00 9785.80 8698.41 25022.84 00:17:12.972 =================================================================================================================== 00:17:12.972 Total : 13071.24 51.06 0.00 0.00 9785.80 8698.41 25022.84 00:17:12.972 0 00:17:12.972 06:41:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:12.972 06:41:52 -- host/digest.sh@92 -- # get_accel_stats 00:17:12.972 06:41:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:12.972 06:41:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:12.972 | select(.opcode=="crc32c") 00:17:12.972 | "\(.module_name) \(.executed)"' 00:17:12.972 06:41:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:13.229 06:41:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:13.229 06:41:53 -- host/digest.sh@93 -- # exp_module=software 00:17:13.229 06:41:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:13.229 06:41:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:13.229 06:41:53 -- host/digest.sh@97 -- # killprocess 83210 00:17:13.229 06:41:53 -- common/autotest_common.sh@926 -- # '[' -z 83210 ']' 00:17:13.229 06:41:53 -- common/autotest_common.sh@930 -- # kill -0 83210 00:17:13.229 06:41:53 -- common/autotest_common.sh@931 -- # uname 00:17:13.229 06:41:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:13.229 06:41:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83210 00:17:13.229 06:41:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:13.229 06:41:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:13.229 killing process with pid 83210 00:17:13.229 06:41:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83210' 00:17:13.229 Received shutdown signal, test time was about 2.000000 seconds 00:17:13.229 00:17:13.229 Latency(us) 00:17:13.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.229 =================================================================================================================== 00:17:13.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.229 06:41:53 -- common/autotest_common.sh@945 -- # kill 83210 00:17:13.229 06:41:53 -- common/autotest_common.sh@950 -- # wait 83210 00:17:13.488 06:41:53 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:17:13.488 06:41:53 -- host/digest.sh@77 -- # local rw bs qd 00:17:13.488 06:41:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:13.488 06:41:53 -- host/digest.sh@80 -- # rw=randread 00:17:13.488 06:41:53 -- host/digest.sh@80 -- # bs=131072 00:17:13.488 06:41:53 -- host/digest.sh@80 -- # qd=16 00:17:13.488 06:41:53 -- host/digest.sh@82 -- # bperfpid=83265 00:17:13.488 06:41:53 -- host/digest.sh@83 -- # waitforlisten 83265 /var/tmp/bperf.sock 00:17:13.488 06:41:53 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:13.488 06:41:53 -- common/autotest_common.sh@819 -- # '[' -z 83265 ']' 00:17:13.488 06:41:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:13.488 06:41:53 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:17:13.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:13.488 06:41:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:13.488 06:41:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.488 06:41:53 -- common/autotest_common.sh@10 -- # set +x 00:17:13.488 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:13.488 Zero copy mechanism will not be used. 00:17:13.488 [2024-07-12 06:41:53.358744] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:13.488 [2024-07-12 06:41:53.358826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83265 ] 00:17:13.747 [2024-07-12 06:41:53.502087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.747 [2024-07-12 06:41:53.546839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.747 06:41:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.747 06:41:53 -- common/autotest_common.sh@852 -- # return 0 00:17:13.747 06:41:53 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:13.747 06:41:53 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:13.747 06:41:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:14.006 06:41:53 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.006 06:41:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.574 nvme0n1 00:17:14.574 06:41:54 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:14.574 06:41:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:14.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:14.574 Zero copy mechanism will not be used. 00:17:14.574 Running I/O for 2 seconds... 
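Note: this second run (randread, 128 KiB blocks, queue depth 16) is one of four run_bperf invocations in the clean-digest test, which sweeps both workloads at both block-size/queue-depth points. The loop form below is illustrative only, an assumption for compactness (digest.sh actually issues the four calls on consecutive script lines, @122 through @125); the argument triples are the ones visible in this log:

  # the four clean-digest workloads seen in this log: rw, block size, queue depth
  for args in "randread 4096 128" "randread 131072 16" \
              "randwrite 4096 128" "randwrite 131072 16"; do
      set -- $args
      run_bperf "$1" "$2" "$3"
  done

The 128 KiB runs also trip the zero-copy notice above, since 131072 exceeds the 65536-byte zero-copy threshold.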
00:17:16.478 00:17:16.478 Latency(us) 00:17:16.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.478 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:16.478 nvme0n1 : 2.00 6439.47 804.93 0.00 0.00 2480.89 2070.34 4170.47 00:17:16.478 =================================================================================================================== 00:17:16.478 Total : 6439.47 804.93 0.00 0.00 2480.89 2070.34 4170.47 00:17:16.478 0 00:17:16.478 06:41:56 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:16.478 06:41:56 -- host/digest.sh@92 -- # get_accel_stats 00:17:16.478 06:41:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:16.478 06:41:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:16.478 | select(.opcode=="crc32c") 00:17:16.478 | "\(.module_name) \(.executed)"' 00:17:16.478 06:41:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:16.737 06:41:56 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:16.737 06:41:56 -- host/digest.sh@93 -- # exp_module=software 00:17:16.737 06:41:56 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:16.737 06:41:56 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.737 06:41:56 -- host/digest.sh@97 -- # killprocess 83265 00:17:16.737 06:41:56 -- common/autotest_common.sh@926 -- # '[' -z 83265 ']' 00:17:16.737 06:41:56 -- common/autotest_common.sh@930 -- # kill -0 83265 00:17:16.737 06:41:56 -- common/autotest_common.sh@931 -- # uname 00:17:16.737 06:41:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.737 06:41:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83265 00:17:16.737 06:41:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:16.737 06:41:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:16.737 06:41:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83265' 00:17:16.737 killing process with pid 83265 00:17:16.737 06:41:56 -- common/autotest_common.sh@945 -- # kill 83265 00:17:16.737 Received shutdown signal, test time was about 2.000000 seconds 00:17:16.737 00:17:16.737 Latency(us) 00:17:16.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.737 =================================================================================================================== 00:17:16.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.737 06:41:56 -- common/autotest_common.sh@950 -- # wait 83265 00:17:16.995 06:41:56 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:17:16.995 06:41:56 -- host/digest.sh@77 -- # local rw bs qd 00:17:16.995 06:41:56 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:16.995 06:41:56 -- host/digest.sh@80 -- # rw=randwrite 00:17:16.995 06:41:56 -- host/digest.sh@80 -- # bs=4096 00:17:16.995 06:41:56 -- host/digest.sh@80 -- # qd=128 00:17:16.995 06:41:56 -- host/digest.sh@82 -- # bperfpid=83318 00:17:16.995 06:41:56 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:16.995 06:41:56 -- host/digest.sh@83 -- # waitforlisten 83318 /var/tmp/bperf.sock 00:17:16.995 06:41:56 -- common/autotest_common.sh@819 -- # '[' -z 83318 ']' 00:17:16.995 06:41:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:16.995 06:41:56 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:17:16.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:16.995 06:41:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:16.995 06:41:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.996 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:17:16.996 [2024-07-12 06:41:56.851062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:16.996 [2024-07-12 06:41:56.851168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83318 ] 00:17:17.253 [2024-07-12 06:41:56.996114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.253 [2024-07-12 06:41:57.037344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.253 06:41:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.253 06:41:57 -- common/autotest_common.sh@852 -- # return 0 00:17:17.253 06:41:57 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:17.253 06:41:57 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:17.253 06:41:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:17.511 06:41:57 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:17.511 06:41:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.077 nvme0n1 00:17:18.077 06:41:57 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:18.077 06:41:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:18.077 Running I/O for 2 seconds... 
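Note: when each 2-second run finishes, digest.sh cross-checks the accel framework: the crc32c opcode must actually have executed, and in the expected module. A sketch of that check reconstructed from the host/digest.sh@36-95 trace lines (the process substitution is inferred from the trace ordering, and the "[[ 0 -eq 1 ]]" guard suggests a hardware-offload branch that is never taken in this job, so only the software case is shown):

  # pull "module executed" for the crc32c opcode out of accel_get_stats
  get_accel_stats() {
      bperf_rpc accel_get_stats \
          | jq -rc '.operations[]
                    | select(.opcode=="crc32c")
                    | "\(.module_name) \(.executed)"'
  }

  read -r acc_module acc_executed < <(get_accel_stats)
  exp_module=software                 # no HW offload configured here
  (( acc_executed > 0 ))              # digests must actually have run
  [[ $acc_module == "$exp_module" ]]  # and in the expected module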
00:17:19.992 00:17:19.992 Latency(us) 00:17:19.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.992 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.992 nvme0n1 : 2.01 13998.22 54.68 0.00 0.00 9135.37 8221.79 17754.30 00:17:19.992 =================================================================================================================== 00:17:19.992 Total : 13998.22 54.68 0.00 0.00 9135.37 8221.79 17754.30 00:17:19.992 0 00:17:19.992 06:41:59 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:19.992 06:41:59 -- host/digest.sh@92 -- # get_accel_stats 00:17:19.992 06:41:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:19.992 06:41:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:19.992 06:41:59 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:19.992 | select(.opcode=="crc32c") 00:17:19.992 | "\(.module_name) \(.executed)"' 00:17:20.271 06:42:00 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:20.271 06:42:00 -- host/digest.sh@93 -- # exp_module=software 00:17:20.271 06:42:00 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:20.271 06:42:00 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:20.271 06:42:00 -- host/digest.sh@97 -- # killprocess 83318 00:17:20.271 06:42:00 -- common/autotest_common.sh@926 -- # '[' -z 83318 ']' 00:17:20.271 06:42:00 -- common/autotest_common.sh@930 -- # kill -0 83318 00:17:20.271 06:42:00 -- common/autotest_common.sh@931 -- # uname 00:17:20.271 06:42:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:20.271 06:42:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83318 00:17:20.271 killing process with pid 83318 00:17:20.271 Received shutdown signal, test time was about 2.000000 seconds 00:17:20.271 00:17:20.271 Latency(us) 00:17:20.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.271 =================================================================================================================== 00:17:20.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.271 06:42:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:20.271 06:42:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:20.271 06:42:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83318' 00:17:20.271 06:42:00 -- common/autotest_common.sh@945 -- # kill 83318 00:17:20.271 06:42:00 -- common/autotest_common.sh@950 -- # wait 83318 00:17:20.529 06:42:00 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:17:20.529 06:42:00 -- host/digest.sh@77 -- # local rw bs qd 00:17:20.529 06:42:00 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:20.529 06:42:00 -- host/digest.sh@80 -- # rw=randwrite 00:17:20.529 06:42:00 -- host/digest.sh@80 -- # bs=131072 00:17:20.529 06:42:00 -- host/digest.sh@80 -- # qd=16 00:17:20.529 06:42:00 -- host/digest.sh@82 -- # bperfpid=83369 00:17:20.529 06:42:00 -- host/digest.sh@83 -- # waitforlisten 83369 /var/tmp/bperf.sock 00:17:20.529 06:42:00 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:20.529 06:42:00 -- common/autotest_common.sh@819 -- # '[' -z 83369 ']' 00:17:20.529 06:42:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:20.529 06:42:00 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:17:20.529 06:42:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:20.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:20.529 06:42:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.529 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:17:20.529 [2024-07-12 06:42:00.404335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:20.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:20.529 Zero copy mechanism will not be used. 00:17:20.529 [2024-07-12 06:42:00.405087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83369 ] 00:17:20.788 [2024-07-12 06:42:00.546749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.788 [2024-07-12 06:42:00.586144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.788 06:42:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:20.788 06:42:00 -- common/autotest_common.sh@852 -- # return 0 00:17:20.788 06:42:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:20.788 06:42:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:20.788 06:42:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:21.047 06:42:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.047 06:42:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.613 nvme0n1 00:17:21.613 06:42:01 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:21.613 06:42:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:21.613 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:21.613 Zero copy mechanism will not be used. 00:17:21.613 Running I/O for 2 seconds... 
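Note: the repeated teardown between runs is common/autotest_common.sh's killprocess, whose steps are all visible in the xtrace above (pid guard, liveness probe, comm lookup, kill, reap). A sketch under the assumption that the untaken sudo branch can be elided:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1      # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1     # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          # reactor_1 is the SPDK thread name reported by ps in this log
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" = sudo ]; then
          :  # the real helper treats a sudo wrapper specially; never hit here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                    # reap so the socket can be reused next run
  }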
00:17:23.516 00:17:23.516 Latency(us) 00:17:23.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:23.516 nvme0n1 : 2.00 6427.71 803.46 0.00 0.00 2483.23 1757.56 4468.36 00:17:23.516 =================================================================================================================== 00:17:23.516 Total : 6427.71 803.46 0.00 0.00 2483.23 1757.56 4468.36 00:17:23.516 0 00:17:23.776 06:42:03 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:23.776 06:42:03 -- host/digest.sh@92 -- # get_accel_stats 00:17:23.776 06:42:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:23.776 06:42:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:23.776 | select(.opcode=="crc32c") 00:17:23.776 | "\(.module_name) \(.executed)"' 00:17:23.776 06:42:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:24.035 06:42:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:24.035 06:42:03 -- host/digest.sh@93 -- # exp_module=software 00:17:24.035 06:42:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:24.035 06:42:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:24.035 06:42:03 -- host/digest.sh@97 -- # killprocess 83369 00:17:24.035 06:42:03 -- common/autotest_common.sh@926 -- # '[' -z 83369 ']' 00:17:24.035 06:42:03 -- common/autotest_common.sh@930 -- # kill -0 83369 00:17:24.035 06:42:03 -- common/autotest_common.sh@931 -- # uname 00:17:24.035 06:42:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.035 06:42:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83369 00:17:24.035 killing process with pid 83369 00:17:24.035 Received shutdown signal, test time was about 2.000000 seconds 00:17:24.035 00:17:24.035 Latency(us) 00:17:24.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.035 =================================================================================================================== 00:17:24.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.035 06:42:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:24.035 06:42:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:24.035 06:42:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83369' 00:17:24.035 06:42:03 -- common/autotest_common.sh@945 -- # kill 83369 00:17:24.035 06:42:03 -- common/autotest_common.sh@950 -- # wait 83369 00:17:24.035 06:42:03 -- host/digest.sh@126 -- # killprocess 83178 00:17:24.035 06:42:03 -- common/autotest_common.sh@926 -- # '[' -z 83178 ']' 00:17:24.035 06:42:03 -- common/autotest_common.sh@930 -- # kill -0 83178 00:17:24.035 06:42:03 -- common/autotest_common.sh@931 -- # uname 00:17:24.035 06:42:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.035 06:42:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83178 00:17:24.035 killing process with pid 83178 00:17:24.035 06:42:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:24.035 06:42:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:24.035 06:42:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83178' 00:17:24.035 06:42:03 -- common/autotest_common.sh@945 -- # kill 83178 00:17:24.035 06:42:03 -- common/autotest_common.sh@950 -- # wait 83178 00:17:24.293 ************************************ 
00:17:24.293 END TEST nvmf_digest_clean 00:17:24.293 ************************************ 00:17:24.293 00:17:24.293 real 0m15.994s 00:17:24.293 user 0m30.717s 00:17:24.293 sys 0m4.370s 00:17:24.293 06:42:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.293 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.293 06:42:04 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:24.293 06:42:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:24.293 06:42:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.293 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.293 ************************************ 00:17:24.293 START TEST nvmf_digest_error 00:17:24.293 ************************************ 00:17:24.293 06:42:04 -- common/autotest_common.sh@1104 -- # run_digest_error 00:17:24.293 06:42:04 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:24.293 06:42:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.294 06:42:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:24.294 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 06:42:04 -- nvmf/common.sh@469 -- # nvmfpid=83446 00:17:24.294 06:42:04 -- nvmf/common.sh@470 -- # waitforlisten 83446 00:17:24.294 06:42:04 -- common/autotest_common.sh@819 -- # '[' -z 83446 ']' 00:17:24.294 06:42:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.294 06:42:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:24.294 06:42:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.294 06:42:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.294 06:42:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.294 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 [2024-07-12 06:42:04.137912] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:24.294 [2024-07-12 06:42:04.137998] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.552 [2024-07-12 06:42:04.275204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.552 [2024-07-12 06:42:04.306644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:24.552 [2024-07-12 06:42:04.306809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.552 [2024-07-12 06:42:04.306825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.552 [2024-07-12 06:42:04.306834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
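Note: nvmf_digest_error depends on starting the target suspended: with --wait-for-rpc nothing is finalized yet, so the accel_assign_opc call a few lines below can route crc32c to the error-injection module before framework_start_init runs. The startup sequence from the trace, in sketch form (rpc_cmd is assumed to wrap rpc.py against /var/tmp/spdk.sock, the socket named in the waitforlisten message):

  # start the target suspended inside its netns; nothing runs until init
  ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                   # /var/tmp/spdk.sock is up

  # route every crc32c operation through the error-injection accel module
  rpc_cmd accel_assign_opc -o crc32c -m error

The oversized-tracepoint complaint above (RDMA_REQ_RDY_TO_COMPL_PEND too long) is a known cosmetic error from -e 0xFFFF enabling all trace groups; it does not affect the test.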
00:17:24.552 [2024-07-12 06:42:04.306863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.552 06:42:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.552 06:42:04 -- common/autotest_common.sh@852 -- # return 0 00:17:24.552 06:42:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:24.552 06:42:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:24.552 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.552 06:42:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.552 06:42:04 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:24.552 06:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.552 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.552 [2024-07-12 06:42:04.411264] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:24.552 06:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.552 06:42:04 -- host/digest.sh@104 -- # common_target_config 00:17:24.552 06:42:04 -- host/digest.sh@43 -- # rpc_cmd 00:17:24.552 06:42:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.552 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.811 null0 00:17:24.811 [2024-07-12 06:42:04.480503] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.811 [2024-07-12 06:42:04.504525] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.811 06:42:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.811 06:42:04 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:24.811 06:42:04 -- host/digest.sh@54 -- # local rw bs qd 00:17:24.811 06:42:04 -- host/digest.sh@56 -- # rw=randread 00:17:24.811 06:42:04 -- host/digest.sh@56 -- # bs=4096 00:17:24.811 06:42:04 -- host/digest.sh@56 -- # qd=128 00:17:24.811 06:42:04 -- host/digest.sh@58 -- # bperfpid=83465 00:17:24.811 06:42:04 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:24.811 06:42:04 -- host/digest.sh@60 -- # waitforlisten 83465 /var/tmp/bperf.sock 00:17:24.811 06:42:04 -- common/autotest_common.sh@819 -- # '[' -z 83465 ']' 00:17:24.811 06:42:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.811 06:42:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:24.811 06:42:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.811 06:42:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.811 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:17:24.811 [2024-07-12 06:42:04.558354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:24.811 [2024-07-12 06:42:04.558450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83465 ] 00:17:24.811 [2024-07-12 06:42:04.692903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.811 [2024-07-12 06:42:04.725681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.747 06:42:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.747 06:42:05 -- common/autotest_common.sh@852 -- # return 0 00:17:25.747 06:42:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:25.747 06:42:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:26.007 06:42:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:26.007 06:42:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.007 06:42:05 -- common/autotest_common.sh@10 -- # set +x 00:17:26.007 06:42:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.007 06:42:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.007 06:42:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.266 nvme0n1 00:17:26.266 06:42:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:26.266 06:42:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.266 06:42:06 -- common/autotest_common.sh@10 -- # set +x 00:17:26.266 06:42:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.266 06:42:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:26.266 06:42:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:26.524 Running I/O for 2 seconds... 
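Note: the flood of data digest errors that follows is the point of the test, not a regression: digest corruption is switched off while the controller attaches, then enabled for 256 operations, so every subsequent read fails crc32c verification on receive and completes as a transient transport error that the bdev layer retries indefinitely (--bdev-retry-count -1). The injection sequence from the xtrace:

  # retry forever at the bdev layer so injected digest errors cannot
  # fail the job, and keep per-error statistics
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach cleanly first, with injection disabled on the target side
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # now corrupt the next 256 crc32c operations and run the workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  bperf_py perform_tests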
00:17:26.524 [2024-07-12 06:42:06.244632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.524 [2024-07-12 06:42:06.244709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.524 [2024-07-12 06:42:06.244723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.524 [2024-07-12 06:42:06.260133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.524 [2024-07-12 06:42:06.260182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.524 [2024-07-12 06:42:06.260194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.524 [2024-07-12 06:42:06.275177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.524 [2024-07-12 06:42:06.275226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.524 [2024-07-12 06:42:06.275238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.524 [2024-07-12 06:42:06.291241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.524 [2024-07-12 06:42:06.291307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.524 [2024-07-12 06:42:06.291320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.524 [2024-07-12 06:42:06.308801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.524 [2024-07-12 06:42:06.308850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.308862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.325135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.325186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.325197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.340070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.340118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.340130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.355053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.355101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.355114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.369829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.369877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.369889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.385021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.385069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.385082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.401148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.401181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.401193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.418168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.418216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.418228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.525 [2024-07-12 06:42:06.434267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.525 [2024-07-12 06:42:06.434348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.525 [2024-07-12 06:42:06.434360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.451178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.451240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.467082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.467130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.467142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.484509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.484546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.484562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.501008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.501055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.501067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.516816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.516864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.516877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.532577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.532626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.532639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.549166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.549214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.549226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.564185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.564232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.564244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.579302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.579349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.579362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.594619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.784 [2024-07-12 06:42:06.594699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.784 [2024-07-12 06:42:06.594712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.784 [2024-07-12 06:42:06.609577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.609625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.609636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.785 [2024-07-12 06:42:06.624580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.624627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.624639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.785 [2024-07-12 06:42:06.639613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.639662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.639674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.785 [2024-07-12 06:42:06.654706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.654755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.654768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.785 [2024-07-12 06:42:06.669692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.669740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.669752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.785 [2024-07-12 06:42:06.684889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.684936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.684948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.785 [2024-07-12 06:42:06.699897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:26.785 [2024-07-12 06:42:06.699945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.785 [2024-07-12 06:42:06.699957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.715747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.715795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.715807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.730854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.730906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.730920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.746694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.746746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.746759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.763068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.763116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.763129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.778755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.778805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.778818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.794513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.794549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.794563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.812590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.812641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.812668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.829301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.829349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.829360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.845619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.845682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.845695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.862162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.862211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.862223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.878781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.878835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.878849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.894210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.894258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.894271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.909328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.909377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.909389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.924619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.924667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.044 [2024-07-12 06:42:06.924679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.044 [2024-07-12 06:42:06.939912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.044 [2024-07-12 06:42:06.939945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.045 [2024-07-12 06:42:06.939966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.045 [2024-07-12 06:42:06.955061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.045 [2024-07-12 06:42:06.955108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.045 [2024-07-12 06:42:06.955121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:06.970857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:06.970894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:06.970907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:06.986234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:06.986282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:06.986294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.001166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 
00:17:27.303 [2024-07-12 06:42:07.001225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.001238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.016343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.016392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.016403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.031448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.031496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.031509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.047746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.047794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.047807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.064962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.065033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.065047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.080433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.080481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.080493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.095514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.095561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.095573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.110400] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.110449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.110460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.125580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.125629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.125641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.140722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.140771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.140783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.155703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.155751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.155762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.170688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.170738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.170751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.185762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.185810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.185822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.200843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.303 [2024-07-12 06:42:07.200891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.303 [2024-07-12 06:42:07.200902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:27.303 [2024-07-12 06:42:07.215836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.304 [2024-07-12 06:42:07.215885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.304 [2024-07-12 06:42:07.215897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.238577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.238648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.238662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.255246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.255296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.255323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.271563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.271610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.271622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.287338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.287386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.287398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.303403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.303453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.303466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.321541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.321577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.321590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.339020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.339068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.339081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.356556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.356607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.356619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.372794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.372843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.372855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.389150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.389197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.389210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.406540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.406592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.406605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.422554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.422642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.422657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.437774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.437823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 
06:42:07.437835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.452805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.452854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.452865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.467850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.467898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.467910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.562 [2024-07-12 06:42:07.483321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.562 [2024-07-12 06:42:07.483371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.562 [2024-07-12 06:42:07.483383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.498808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.498858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.498871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.513892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.513940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.513952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.528948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.529017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.529029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.544160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.544208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8723 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.544221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.561417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.561453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.561466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.578802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.578839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.578853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.596071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.596147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.596161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.612691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.612759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.612773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.629998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.630087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.630108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.648309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.648357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.648371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.666301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.666347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.666361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.684021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.684069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.684083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.702119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.702164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.702178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.720298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.720345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.720360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.820 [2024-07-12 06:42:07.738463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:27.820 [2024-07-12 06:42:07.738511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.820 [2024-07-12 06:42:07.738525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.756517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.756580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.756603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.774608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.774668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.774683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.792666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 
06:42:07.792712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.792725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.810849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.810892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.810906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.828825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.828870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.828884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.846800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.846848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.846863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.864986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.865032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.865046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.883058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.883103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.883118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.105 [2024-07-12 06:42:07.901275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.105 [2024-07-12 06:42:07.901327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.105 [2024-07-12 06:42:07.901341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.106 [2024-07-12 06:42:07.919600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24db1b0) 00:17:28.106 [2024-07-12 06:42:07.919647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.106 [2024-07-12 06:42:07.919662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.106 [2024-07-12 06:42:07.937859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.106 [2024-07-12 06:42:07.937919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.106 [2024-07-12 06:42:07.937947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.106 [2024-07-12 06:42:07.956317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.106 [2024-07-12 06:42:07.956362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.106 [2024-07-12 06:42:07.956376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.106 [2024-07-12 06:42:07.974596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.106 [2024-07-12 06:42:07.974666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.106 [2024-07-12 06:42:07.974682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.106 [2024-07-12 06:42:07.993089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.106 [2024-07-12 06:42:07.993144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.106 [2024-07-12 06:42:07.993159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.106 [2024-07-12 06:42:08.011173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.106 [2024-07-12 06:42:08.011219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.106 [2024-07-12 06:42:08.011234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.029204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.029249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.029263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.047208] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.047252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.047266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.065274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.065318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.065332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.083355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.083417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.083440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.101290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.101338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.101352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.119442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.119487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.119501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.137472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.137517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.137531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.365 [2024-07-12 06:42:08.155569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0) 00:17:28.365 [2024-07-12 06:42:08.155612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.365 [2024-07-12 06:42:08.155626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:17:28.365 [2024-07-12 06:42:08.173822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0)
00:17:28.365 [2024-07-12 06:42:08.173867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:28.365 [2024-07-12 06:42:08.173881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:28.365 [2024-07-12 06:42:08.192105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0)
00:17:28.365 [2024-07-12 06:42:08.192152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:28.365 [2024-07-12 06:42:08.192168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:28.365 [2024-07-12 06:42:08.210290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0)
00:17:28.365 [2024-07-12 06:42:08.210336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:28.365 [2024-07-12 06:42:08.210351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:28.365 [2024-07-12 06:42:08.228118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24db1b0)
00:17:28.365 [2024-07-12 06:42:08.228162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:28.365 [2024-07-12 06:42:08.228178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:28.365
00:17:28.365 Latency(us)
00:17:28.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:28.365 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:28.365 nvme0n1 : 2.01 15363.18 60.01 0.00 0.00 8324.62 7238.75 31218.97
00:17:28.365 ===================================================================================================================
00:17:28.365 Total : 15363.18 60.01 0.00 0.00 8324.62 7238.75 31218.97
00:17:28.365 0
00:17:28.365 06:42:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
06:42:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:28.365 | .driver_specific
00:17:28.365 | .nvme_error
00:17:28.365 | .status_code
00:17:28.365 | .command_transient_transport_error'
06:42:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
06:42:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
06:42:08 -- host/digest.sh@71 -- # (( 121 > 0 ))
00:17:28.623 06:42:08 -- host/digest.sh@73 -- # killprocess 83465
00:17:28.623 06:42:08 -- common/autotest_common.sh@926 -- # '[' -z 83465 ']'
00:17:28.623 06:42:08 -- common/autotest_common.sh@930 -- # kill -0 83465
00:17:28.881 06:42:08 -- common/autotest_common.sh@931 -- # uname
00:17:28.881 06:42:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:28.881 06:42:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83465
00:17:28.881 06:42:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:28.881 06:42:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:28.881 killing process with pid 83465
06:42:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83465'
06:42:08 -- common/autotest_common.sh@945 -- # kill 83465
00:17:28.881 Received shutdown signal, test time was about 2.000000 seconds
00:17:28.881
00:17:28.881 Latency(us)
00:17:28.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:28.881 ===================================================================================================================
00:17:28.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:28.881 06:42:08 -- common/autotest_common.sh@950 -- # wait 83465
00:17:28.881 06:42:08 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:17:28.881 06:42:08 -- host/digest.sh@54 -- # local rw bs qd
00:17:28.881 06:42:08 -- host/digest.sh@56 -- # rw=randread
00:17:28.881 06:42:08 -- host/digest.sh@56 -- # bs=131072
00:17:28.881 06:42:08 -- host/digest.sh@56 -- # qd=16
00:17:28.881 06:42:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:28.881 06:42:08 -- host/digest.sh@58 -- # bperfpid=83525
00:17:28.881 06:42:08 -- host/digest.sh@60 -- # waitforlisten 83525 /var/tmp/bperf.sock
00:17:28.881 06:42:08 -- common/autotest_common.sh@819 -- # '[' -z 83525 ']'
00:17:28.881 06:42:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:28.881 06:42:08 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:28.881 06:42:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:28.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:42:08 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:28.881 06:42:08 -- common/autotest_common.sh@10 -- # set +x
00:17:28.881 [2024-07-12 06:42:08.749897] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
[2024-07-12 06:42:08.749995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83525 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
00:17:29.139 [2024-07-12 06:42:08.887411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:29.139 [2024-07-12 06:42:08.923874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:29.139 06:42:09 -- common/autotest_common.sh@848 -- # (( i == 0 ))
06:42:09 -- common/autotest_common.sh@852 -- # return 0
00:17:29.139 06:42:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:29.139 06:42:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:29.397 06:42:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:29.397 06:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:29.397 06:42:09 -- common/autotest_common.sh@10 -- # set +x
00:17:29.397 06:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:29.397 06:42:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:29.397 06:42:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:29.963 nvme0n1
00:17:29.963 06:42:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:29.963 06:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:29.963 06:42:09 -- common/autotest_common.sh@10 -- # set +x
00:17:29.963 06:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:29.963 06:42:09 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:29.963 06:42:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
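For readers following the trace: the xtrace output above is the core of the digest error test in host/digest.sh, and it reads as a short script. The sketch below is condensed from this trace only. bperf_rpc, bperf_py, rpc_cmd, and get_transient_errcount are the harness's own wrappers (bperf_rpc and bperf_py drive rpc.py and bdevperf.py against /var/tmp/bperf.sock, as traced; the socket rpc_cmd talks to is not visible here), and the address, NQN, I/O parameters, and pass condition are the values from this run.

  # launch bdevperf as the NVMe-oF TCP initiator (128KiB randread, qd 16, 2s)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # count NVMe status codes per bdev and retry failed I/O indefinitely
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the target with data digest enabled, then corrupt crc32c
  # computations at the configured interval so reads fail their digest check
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the workload, then pull the transient transport error count out of
  # the bdev iostat JSON; the test passes if at least one error was counted
  bperf_py perform_tests
  errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))

Each injected failure shows up below as an nvme_tcp.c:1391 *ERROR* data digest line, and the affected command is completed back to the bdev layer as TRANSIENT TRANSPORT ERROR (00/22), which is exactly the counter the jq filter reads.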
00:17:29.963 [2024-07-12 06:42:09.771124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.771182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.771198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.775736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.775835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.775878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.780879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.780918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.780932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.785434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.785502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.785555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.790299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.790339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.790353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.794842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.794880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.794894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.799386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.799432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.799445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.804212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.963 [2024-07-12 06:42:09.804250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.963 [2024-07-12 06:42:09.804264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.963 [2024-07-12 06:42:09.808970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.809024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.809058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.813718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.813757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.813770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.818386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.818435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.818447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.822993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.823050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.823077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.827600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.827649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.827661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.832202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.832250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.832278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.836413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.836450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.836463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.840700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.840744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.840761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.845135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.845183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.845209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.850129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.850166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.850179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.854875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.854912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.854925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.859644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.859725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.859738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.864412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.864448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.864461] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.868935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.869011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.869024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.873727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.873775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.873788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.878434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.878484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.878497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.964 [2024-07-12 06:42:09.883135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:29.964 [2024-07-12 06:42:09.883184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.964 [2024-07-12 06:42:09.883198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.224 [2024-07-12 06:42:09.887871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.224 [2024-07-12 06:42:09.887919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.224 [2024-07-12 06:42:09.887947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.224 [2024-07-12 06:42:09.892669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.224 [2024-07-12 06:42:09.892717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.224 [2024-07-12 06:42:09.892729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.224 [2024-07-12 06:42:09.897215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.224 [2024-07-12 06:42:09.897262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:30.224 [2024-07-12 06:42:09.897291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:30.224 [2024-07-12 06:42:09.901711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050)
00:17:30.224 [2024-07-12 06:42:09.901761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:30.224 [2024-07-12 06:42:09.901773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record sequence — *ERROR*: data digest error on tqpair=(0xec3050), *NOTICE*: READ sqid:1 cid:15 nsid:1 (lba varies, len:32, SGL TRANSPORT DATA BLOCK), *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 (sqhd cycling 0001/0021/0041/0061) — repeats for each injected digest error from 06:42:09.905 through 06:42:10.480 ...]
00:17:30.749 [2024-07-12 06:42:10.480898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050)
00:17:30.749 [2024-07-12 06:42:10.480933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:30.749 [2024-07-12 06:42:10.480960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:30.749 [2024-07-12 06:42:10.485233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050)
00:17:30.749 [2024-07-12 06:42:10.485284] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.485327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.489830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.489869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.489898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.494128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.494162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.494190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.498493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.498530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.498558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.503005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.503061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.503074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.507115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.507159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.507187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.511168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.511212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.511240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.515226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.515272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.515301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.519304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.519345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.519387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.523489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.523529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.523556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.527658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.527712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.527740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.531883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.531922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.531950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.536029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.536068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.536096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.540096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.540136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.540164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.544127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.544165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.544193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.548157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.548192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.548219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.552206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.552246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.552273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.556286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.556330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.556358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.560332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.560393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.560421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.564453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.564496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.564524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.568518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.568558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.568586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.572524] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.572564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.572592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.576567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.576607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.576635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.580793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.580831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.580858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.584989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.585034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.585061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.588937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.588998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.589027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.593324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.593361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.593389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.597632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.597684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.597712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:30.750 [2024-07-12 06:42:10.601857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.601893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.750 [2024-07-12 06:42:10.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.750 [2024-07-12 06:42:10.606025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.750 [2024-07-12 06:42:10.606058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.606086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.610048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.610081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.610108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.614147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.614182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.614210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.618178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.618212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.618240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.622506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.622556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.622583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.627002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.627079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.627092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.631498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.631533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.631561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.635632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.635665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.635692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.639711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.639745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.639772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.643821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.643857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.643884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.647989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.648023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.648050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.652034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.652069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.652096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.655977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.656021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.656048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.659943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.660020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.660049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.663955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.664032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.664060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.751 [2024-07-12 06:42:10.668648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:30.751 [2024-07-12 06:42:10.668682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.751 [2024-07-12 06:42:10.668710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.673087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.673120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.009 [2024-07-12 06:42:10.673147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.677466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.677515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.009 [2024-07-12 06:42:10.677543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.681582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.681617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.009 [2024-07-12 06:42:10.681644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.685729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.685764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:31.009 [2024-07-12 06:42:10.685791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.689921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.689981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.009 [2024-07-12 06:42:10.690010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.693979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.694021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.009 [2024-07-12 06:42:10.694048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.009 [2024-07-12 06:42:10.698003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.009 [2024-07-12 06:42:10.698036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.009 [2024-07-12 06:42:10.698063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.701960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.702020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.702048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.706008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.706041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.706068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.710045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.710079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.710105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.714002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.714035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.714062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.717951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.718011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.718024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.721957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.722016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.722045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.725968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.726009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.726036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.729947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.730008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.730037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.733972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.734016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.734043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.737958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.738018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.738047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.742195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.742227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.742238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.746551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.746586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.746636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.751300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.751350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.751392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.755761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.755789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.755815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.760219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.760252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.760266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.764620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.764651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.764662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.768878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.768909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.768920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.773147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 
00:17:31.010 [2024-07-12 06:42:10.773423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.773615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.777819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.778070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.778290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.782668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.782832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.782850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.786990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.787032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.787044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.791035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.791065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.791077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.795192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.795222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.795234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.799224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.799255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.799267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.803210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.803240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.803252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.807486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.807517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.807529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.811506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.811538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.811549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.815504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.815537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.815548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.819658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.819739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.819752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.823722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.823772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.823783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.827745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.827794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.827805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.831896] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.831944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.831983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.836317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.836380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.836392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.840348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.840411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.840422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.844384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.844432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.844444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.848635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.848683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.848695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.852760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.852808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.852820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.856859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.856908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.856920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:31.010 [2024-07-12 06:42:10.861191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.861237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.861249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.865201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.865249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.865276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.869515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.869567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.869579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.873995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.874038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.874052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.878671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.878708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.878721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.883246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.883295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.883307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.010 [2024-07-12 06:42:10.887963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.010 [2024-07-12 06:42:10.888022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.010 [2024-07-12 06:42:10.888047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:31.010 [2024-07-12 06:42:10.892510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050)
00:17:31.010 [2024-07-12 06:42:10.892558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:31.010 [2024-07-12 06:42:10.892569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... same three-record pattern repeats: roughly 140 further READ commands (qid:1 cid:15 nsid:1, len:32, lba values from 128 to 24928) each hit a data digest error on tqpair=(0xec3050) in nvme_tcp.c:1391 and complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061, wall clock 06:42:10.897 to 06:42:11.523, elapsed 00:17:31.010 to 00:17:31.795 ...]
00:17:31.795 [2024-07-12 06:42:11.527315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050)
00:17:31.795 [2024-07-12 06:42:11.527368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:31.795 [2024-07-12 06:42:11.527381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:31.795 [2024-07-12 06:42:11.531727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050)
00:17:31.795 [2024-07-12 06:42:11.531761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.531788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.536223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.536287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.536301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.540724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.540774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.540800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.545157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.545205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.545232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.549582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.549651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.549683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.554113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.554161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.554188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.558525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.558578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.558591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.563026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.563085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.563113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.567514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.567568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.567582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.572020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.572079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.572106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.576429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.576482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.576496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.581156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.581207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.581234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.585764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.585814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.585841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.590180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.590229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.590256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.594593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 
00:17:31.795 [2024-07-12 06:42:11.594658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.594672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.599140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.599187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.599214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.603751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.603800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.608123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.608171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.608198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.612631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.795 [2024-07-12 06:42:11.612714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.795 [2024-07-12 06:42:11.612740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.795 [2024-07-12 06:42:11.617110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.617159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.617202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.621600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.621701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.621728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.626241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.626309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.626323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.630816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.630856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.630868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.635460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.635499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.635512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.640089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.640138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.640165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.644539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.644594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.644608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.649006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.649064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.649091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.653491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.653546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.653559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.658029] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.658087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.658114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.662475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.662529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.662543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.666873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.666926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.666967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.671411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.671464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.671478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.676023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.676081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.676108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.680509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.680564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.680577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.685084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.685132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.685159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:31.796 [2024-07-12 06:42:11.689521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.689576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.689590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.693912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.693985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.694013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.698219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.698269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.698298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.702524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.702589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.702627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.707059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.707107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.707133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.796 [2024-07-12 06:42:11.711648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:31.796 [2024-07-12 06:42:11.711720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.796 [2024-07-12 06:42:11.711762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.076 [2024-07-12 06:42:11.716457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.076 [2024-07-12 06:42:11.716496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.076 [2024-07-12 06:42:11.716509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.076 [2024-07-12 06:42:11.721143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.076 [2024-07-12 06:42:11.721183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.721196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.725794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.725848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.725862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.730593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.730640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.730653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.735166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.735216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.735243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.739783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.739833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.739860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.744347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.744401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.744415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.748917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.748989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.749001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.753445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.753499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.753512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.077 [2024-07-12 06:42:11.758031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xec3050) 00:17:32.077 [2024-07-12 06:42:11.758090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.077 [2024-07-12 06:42:11.758117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.077 00:17:32.077 Latency(us) 00:17:32.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.077 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:32.077 nvme0n1 : 2.00 7082.73 885.34 0.00 0.00 2256.07 1779.90 10962.39 00:17:32.077 =================================================================================================================== 00:17:32.077 Total : 7082.73 885.34 0.00 0.00 2256.07 1779.90 10962.39 00:17:32.077 0 00:17:32.077 06:42:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:32.077 06:42:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:32.077 06:42:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:32.077 | .driver_specific 00:17:32.077 | .nvme_error 00:17:32.077 | .status_code 00:17:32.077 | .command_transient_transport_error' 00:17:32.077 06:42:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:32.350 06:42:12 -- host/digest.sh@71 -- # (( 457 > 0 )) 00:17:32.350 06:42:12 -- host/digest.sh@73 -- # killprocess 83525 00:17:32.350 06:42:12 -- common/autotest_common.sh@926 -- # '[' -z 83525 ']' 00:17:32.350 06:42:12 -- common/autotest_common.sh@930 -- # kill -0 83525 00:17:32.350 06:42:12 -- common/autotest_common.sh@931 -- # uname 00:17:32.350 06:42:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.350 06:42:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83525 00:17:32.350 06:42:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.350 06:42:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.350 killing process with pid 83525 00:17:32.350 06:42:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83525' 00:17:32.350 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.350 00:17:32.350 Latency(us) 00:17:32.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.350 =================================================================================================================== 00:17:32.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.350 06:42:12 -- common/autotest_common.sh@945 -- # kill 83525 00:17:32.350 06:42:12 -- common/autotest_common.sh@950 -- # wait 83525 00:17:32.350 06:42:12 -- 
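[Editor's note: each failed read above completes with status (00/22), i.e. status code type 0x0 and status code 0x22, the NVMe "Transient Transport Error" that SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR; that is the counter get_transient_errcount reads back. The check traced above reduces to one iostat RPC piped through jq. A minimal standalone sketch of that check, assuming the bdevperf RPC socket is still /var/tmp/bperf.sock as in this run:]

#!/usr/bin/env bash
# Sketch of the get_transient_errcount step traced above (not the script itself).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# bdev_nvme's --nvme-error-stat option makes the driver count completions per
# NVMe status code under driver_specific.nvme_error in bdev_get_iostat output.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The digest test passes when the injected errors were actually observed and
# counted; the run above saw 457 of them.
(( errcount > 0 )) && echo "PASS: $errcount transient transport errors"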
00:17:32.350 06:42:12 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:17:32.350 06:42:12 -- host/digest.sh@54 -- # local rw bs qd
00:17:32.350 06:42:12 -- host/digest.sh@56 -- # rw=randwrite
00:17:32.350 06:42:12 -- host/digest.sh@56 -- # bs=4096
00:17:32.350 06:42:12 -- host/digest.sh@56 -- # qd=128
00:17:32.350 06:42:12 -- host/digest.sh@58 -- # bperfpid=83578
00:17:32.350 06:42:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:32.350 06:42:12 -- host/digest.sh@60 -- # waitforlisten 83578 /var/tmp/bperf.sock
00:17:32.350 06:42:12 -- common/autotest_common.sh@819 -- # '[' -z 83578 ']'
00:17:32.350 06:42:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:32.350 06:42:12 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:32.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:32.350 06:42:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:32.350 06:42:12 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:32.350 06:42:12 -- common/autotest_common.sh@10 -- # set +x
00:17:32.609 [2024-07-12 06:42:12.286109] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:32.609 [2024-07-12 06:42:12.286218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83578 ]
00:17:32.609 [2024-07-12 06:42:12.426411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:32.609 [2024-07-12 06:42:12.462071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:32.868 06:42:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:32.868 06:42:12 -- common/autotest_common.sh@852 -- # return 0
00:17:32.868 06:42:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:32.868 06:42:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:32.868 06:42:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:32.868 06:42:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:32.868 06:42:12 -- common/autotest_common.sh@10 -- # set +x
00:17:33.127 06:42:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:33.127 06:42:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:33.127 06:42:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:33.386 nvme0n1
00:17:33.386 06:42:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:33.386 06:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:33.386 06:42:13 -- common/autotest_common.sh@10 -- # set +x
00:17:33.386 06:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:33.386 06:42:13 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:33.386 06:42:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
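[Editor's note: collapsed into plain commands, the setup traced above is the following sequence. This is a sketch reconstructed only from the commands visible in the trace; note that the trace issues accel_error_inject_error through rpc_cmd rather than bperf_rpc, which is assumed here to address the target application's default RPC socket rather than bperf's.]

#!/usr/bin/env bash
# Reconstruction of the randwrite error-injection setup traced above;
# paths as in this CI workspace.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# bdevperf in wait-for-RPC mode (-z): core mask 0x2, randwrite, 4 KiB IOs,
# queue depth 128, 2 second run, RPC server on $SOCK.
"$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Count NVMe completions per status code and retry failed I/O indefinitely.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: make sure crc32c error injection starts out disabled.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled (--ddgst); this creates nvme0n1.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side again: corrupt crc32c results (flags exactly as in the trace),
# so computed data digests fail verification on the wire.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the workload.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests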
00:17:33.386 Running I/O for 2 seconds...
00:17:33.386 [2024-07-12 06:42:13.213265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ddc00
00:17:33.386 [2024-07-12 06:42:13.214759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:33.386 [2024-07-12 06:42:13.214808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (Data digest error on tqpair=(0x78da90), WRITE sqid:1, completion COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every write between 06:42:13.228 and 06:42:13.993; the intervening records differ only in timestamp, pdu, cid, lba, and sqhd ...]
00:17:34.166 [2024-07-12 06:42:14.006200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190f0ff8
[2024-07-12 06:42:14.007025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6816 len:1
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.166 [2024-07-12 06:42:14.007073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:34.166 [2024-07-12 06:42:14.019737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190f0bc0 00:17:34.166 [2024-07-12 06:42:14.020521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.166 [2024-07-12 06:42:14.020553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:34.166 [2024-07-12 06:42:14.033413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190f0788 00:17:34.166 [2024-07-12 06:42:14.034222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.166 [2024-07-12 06:42:14.034254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:34.166 [2024-07-12 06:42:14.046980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190f0350 00:17:34.166 [2024-07-12 06:42:14.047772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.166 [2024-07-12 06:42:14.047804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:34.166 [2024-07-12 06:42:14.060808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eff18 00:17:34.166 [2024-07-12 06:42:14.061573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.166 [2024-07-12 06:42:14.061605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:34.166 [2024-07-12 06:42:14.074507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190efae0 00:17:34.166 [2024-07-12 06:42:14.075298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.166 [2024-07-12 06:42:14.075346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:34.425 [2024-07-12 06:42:14.089309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ef6a8 00:17:34.425 [2024-07-12 06:42:14.090164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.425 [2024-07-12 06:42:14.090211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:34.425 [2024-07-12 06:42:14.103236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ef270 00:17:34.425 [2024-07-12 06:42:14.103978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.425 [2024-07-12 06:42:14.104020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:34.425 [2024-07-12 06:42:14.117002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eee38 00:17:34.425 [2024-07-12 06:42:14.117710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.425 [2024-07-12 06:42:14.117743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:34.425 [2024-07-12 06:42:14.130691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eea00 00:17:34.425 [2024-07-12 06:42:14.131451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.425 [2024-07-12 06:42:14.131485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.425 [2024-07-12 06:42:14.144584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ee5c8 00:17:34.426 [2024-07-12 06:42:14.145313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.145346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.158405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ee190 00:17:34.426 [2024-07-12 06:42:14.159150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.159182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.172183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190edd58 00:17:34.426 [2024-07-12 06:42:14.172911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.172944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.186132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ed920 00:17:34.426 [2024-07-12 06:42:14.186814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.186847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.200222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ed4e8 00:17:34.426 [2024-07-12 06:42:14.200900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:6345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.200931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.214029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ed0b0 00:17:34.426 [2024-07-12 06:42:14.214773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.214805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.228783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ecc78 00:17:34.426 [2024-07-12 06:42:14.229429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.229477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.242507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ec840 00:17:34.426 [2024-07-12 06:42:14.243235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.243273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.258108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ec408 00:17:34.426 [2024-07-12 06:42:14.258860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.258898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.273450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ebfd0 00:17:34.426 [2024-07-12 06:42:14.274093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.274136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.288201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ebb98 00:17:34.426 [2024-07-12 06:42:14.288865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.288912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.303089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eb760 00:17:34.426 [2024-07-12 06:42:14.303683] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.303730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.317620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eb328 00:17:34.426 [2024-07-12 06:42:14.318205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.318257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.332124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eaef0 00:17:34.426 [2024-07-12 06:42:14.332683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.426 [2024-07-12 06:42:14.332713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:34.426 [2024-07-12 06:42:14.346899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190eaab8 00:17:34.686 [2024-07-12 06:42:14.347595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.347641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.361910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ea680 00:17:34.686 [2024-07-12 06:42:14.362473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.362516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.376394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190ea248 00:17:34.686 [2024-07-12 06:42:14.376936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.377002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.390887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e9e10 00:17:34.686 [2024-07-12 06:42:14.391430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.391473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.405286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e99d8 00:17:34.686 [2024-07-12 06:42:14.405788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.405824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.419437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e95a0 00:17:34.686 [2024-07-12 06:42:14.419930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.419965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.433311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e9168 00:17:34.686 [2024-07-12 06:42:14.433833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.433862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.449068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e8d30 00:17:34.686 [2024-07-12 06:42:14.449585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.449616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.465251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e88f8 00:17:34.686 [2024-07-12 06:42:14.465782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.465812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.480428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e84c0 00:17:34.686 [2024-07-12 06:42:14.480950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.481002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.495222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e8088 00:17:34.686 [2024-07-12 06:42:14.495675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.495704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.509218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e7c50 00:17:34.686 [2024-07-12 
06:42:14.509729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.509760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.525577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e7818 00:17:34.686 [2024-07-12 06:42:14.526073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.526127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.541177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e73e0 00:17:34.686 [2024-07-12 06:42:14.541675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.541707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.555621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e6fa8 00:17:34.686 [2024-07-12 06:42:14.556052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.556082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.569931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e6b70 00:17:34.686 [2024-07-12 06:42:14.570356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.570386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.584234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e6738 00:17:34.686 [2024-07-12 06:42:14.584641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.584670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:34.686 [2024-07-12 06:42:14.598561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e6300 00:17:34.686 [2024-07-12 06:42:14.598995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.686 [2024-07-12 06:42:14.599032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.614006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e5ec8 
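(Each failure above is printed as a command/completion pair by nvme_qpair.c. In the completion line, qid is the I/O queue, cid the command identifier, sqhd the submission queue head pointer reported by the controller, and p/m/dnr the phase tag, "more", and do-not-retry bits — dnr:0 is why these errors count as transient: the host is permitted to retry. Here is a hypothetical helper for pulling those fields out of a captured line; the regex and field names are mine, modeled on the spdk_nvme_print_completion output shown above.)

    import re

    COMPLETION_RE = re.compile(
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
        r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>[0-9a-f]+) "
        r"sqhd:(?P<sqhd>[0-9a-f]{4}) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)"
    )

    line = ("COMMAND TRANSIENT TRANSPORT ERROR (00/22) "
            "qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0")
    c = COMPLETION_RE.search(line).groupdict()
    # SCT 0x0 / SC 0x22 is Command Transient Transport Error; dnr=0 means
    # the host may retry, which is why bdevperf keeps the workload running.
    assert (int(c["sct"], 16), int(c["sc"], 16), c["dnr"]) == (0, 0x22, "0")
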
00:17:34.946 [2024-07-12 06:42:14.614390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.614419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.628530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e5a90 00:17:34.946 [2024-07-12 06:42:14.628924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.628978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.642850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e5658 00:17:34.946 [2024-07-12 06:42:14.643232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.643262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.657556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e5220 00:17:34.946 [2024-07-12 06:42:14.657953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.657991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.671911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e4de8 00:17:34.946 [2024-07-12 06:42:14.672248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.672278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.685659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e49b0 00:17:34.946 [2024-07-12 06:42:14.685990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.686029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.699653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e4578 00:17:34.946 [2024-07-12 06:42:14.699977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.700013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.713352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with 
pdu=0x2000190e4140 00:17:34.946 [2024-07-12 06:42:14.713679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.713712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.727056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e3d08 00:17:34.946 [2024-07-12 06:42:14.727360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.727390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.740690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e38d0 00:17:34.946 [2024-07-12 06:42:14.740988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.946 [2024-07-12 06:42:14.741025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:34.946 [2024-07-12 06:42:14.754275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e3498 00:17:34.947 [2024-07-12 06:42:14.754543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.754580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.768138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e3060 00:17:34.947 [2024-07-12 06:42:14.768381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.768445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.781730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e2c28 00:17:34.947 [2024-07-12 06:42:14.781985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.782037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.795386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e27f0 00:17:34.947 [2024-07-12 06:42:14.795619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.795656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.809194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x78da90) with pdu=0x2000190e23b8 00:17:34.947 [2024-07-12 06:42:14.809432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.809471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.822866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e1f80 00:17:34.947 [2024-07-12 06:42:14.823085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.823105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.836504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e1b48 00:17:34.947 [2024-07-12 06:42:14.836706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.836725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.850302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e1710 00:17:34.947 [2024-07-12 06:42:14.850498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.850518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:34.947 [2024-07-12 06:42:14.864114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e12d8 00:17:34.947 [2024-07-12 06:42:14.864320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.947 [2024-07-12 06:42:14.864341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.206 [2024-07-12 06:42:14.879153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e0ea0 00:17:35.206 [2024-07-12 06:42:14.879331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.206 [2024-07-12 06:42:14.879351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.206 [2024-07-12 06:42:14.893658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e0a68 00:17:35.206 [2024-07-12 06:42:14.893827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.206 [2024-07-12 06:42:14.893847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.206 [2024-07-12 06:42:14.908324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x78da90) with pdu=0x2000190e0630 00:17:35.206 [2024-07-12 06:42:14.908484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.908505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:14.922111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e01f8 00:17:35.207 [2024-07-12 06:42:14.922261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.922282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:14.935826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190dfdc0 00:17:35.207 [2024-07-12 06:42:14.935970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.935999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:14.950160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190df988 00:17:35.207 [2024-07-12 06:42:14.950301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.950338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:14.966212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190df550 00:17:35.207 [2024-07-12 06:42:14.966357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.966407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:14.981680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190df118 00:17:35.207 [2024-07-12 06:42:14.981801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.981837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:14.996545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190dece0 00:17:35.207 [2024-07-12 06:42:14.996654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:14.996675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.010273] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190de8a8 00:17:35.207 [2024-07-12 06:42:15.010379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.010399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.024015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190de038 00:17:35.207 [2024-07-12 06:42:15.024128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.024149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.043103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190de038 00:17:35.207 [2024-07-12 06:42:15.044370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.044417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.056766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190de470 00:17:35.207 [2024-07-12 06:42:15.058104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.058137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.070728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190de8a8 00:17:35.207 [2024-07-12 06:42:15.072095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.072156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.084780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190dece0 00:17:35.207 [2024-07-12 06:42:15.086042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.086113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.098526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190df118 00:17:35.207 [2024-07-12 06:42:15.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.099852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.112426] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190df550 00:17:35.207 [2024-07-12 06:42:15.113705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.113751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:35.207 [2024-07-12 06:42:15.127083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190df988 00:17:35.207 [2024-07-12 06:42:15.128450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.207 [2024-07-12 06:42:15.128500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.466 [2024-07-12 06:42:15.141656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190dfdc0 00:17:35.466 [2024-07-12 06:42:15.143012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.466 [2024-07-12 06:42:15.143052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:35.466 [2024-07-12 06:42:15.155487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e01f8 00:17:35.466 [2024-07-12 06:42:15.156693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.466 [2024-07-12 06:42:15.156739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.466 [2024-07-12 06:42:15.169581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e0630 00:17:35.466 [2024-07-12 06:42:15.170937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.466 [2024-07-12 06:42:15.171042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:35.466 [2024-07-12 06:42:15.183750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78da90) with pdu=0x2000190e0a68 00:17:35.466 [2024-07-12 06:42:15.184976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.466 [2024-07-12 06:42:15.185048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.466 00:17:35.466 Latency(us) 00:17:35.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.466 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.466 nvme0n1 : 2.00 17691.51 69.11 0.00 0.00 7229.37 6374.87 22997.18 00:17:35.466 =================================================================================================================== 00:17:35.466 Total : 17691.51 69.11 0.00 0.00 7229.37 6374.87 22997.18 00:17:35.466 0 00:17:35.466 06:42:15 -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:35.466 06:42:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:35.466 06:42:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:35.466 | .driver_specific 00:17:35.466 | .nvme_error 00:17:35.466 | .status_code 00:17:35.466 | .command_transient_transport_error' 00:17:35.466 06:42:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:35.725 06:42:15 -- host/digest.sh@71 -- # (( 138 > 0 )) 00:17:35.725 06:42:15 -- host/digest.sh@73 -- # killprocess 83578 00:17:35.725 06:42:15 -- common/autotest_common.sh@926 -- # '[' -z 83578 ']' 00:17:35.725 06:42:15 -- common/autotest_common.sh@930 -- # kill -0 83578 00:17:35.725 06:42:15 -- common/autotest_common.sh@931 -- # uname 00:17:35.725 06:42:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:35.725 06:42:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83578 00:17:35.725 06:42:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:35.725 killing process with pid 83578 00:17:35.725 Received shutdown signal, test time was about 2.000000 seconds 00:17:35.725 00:17:35.725 Latency(us) 00:17:35.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.725 =================================================================================================================== 00:17:35.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.725 06:42:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:35.725 06:42:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83578' 00:17:35.725 06:42:15 -- common/autotest_common.sh@945 -- # kill 83578 00:17:35.725 06:42:15 -- common/autotest_common.sh@950 -- # wait 83578 00:17:35.725 06:42:15 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:17:35.725 06:42:15 -- host/digest.sh@54 -- # local rw bs qd 00:17:35.725 06:42:15 -- host/digest.sh@56 -- # rw=randwrite 00:17:35.725 06:42:15 -- host/digest.sh@56 -- # bs=131072 00:17:35.725 06:42:15 -- host/digest.sh@56 -- # qd=16 00:17:35.725 06:42:15 -- host/digest.sh@58 -- # bperfpid=83625 00:17:35.725 06:42:15 -- host/digest.sh@60 -- # waitforlisten 83625 /var/tmp/bperf.sock 00:17:35.725 06:42:15 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:35.725 06:42:15 -- common/autotest_common.sh@819 -- # '[' -z 83625 ']' 00:17:35.725 06:42:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:35.725 06:42:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:35.725 06:42:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:35.725 06:42:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.725 06:42:15 -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 [2024-07-12 06:42:15.692066] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:35.984 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:35.984 Zero copy mechanism will not be used. 
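(A few lines back, the harness validated the first run: get_transient_errcount issues bdev_get_iostat over the bperf RPC socket, and the jq pipeline walks .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error — those per-status-code counters exist because the controller is configured with --nvme-error-stat, as the next run's trace shows — and the (( 138 > 0 )) check passes with 138 injected errors counted. Below is a rough Python equivalent of that shell/jq pipeline; the script path and socket are taken from the trace, so adjust them for your checkout.)

    import json
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def get_transient_errcount(bdev: str, sock: str = "/var/tmp/bperf.sock") -> int:
        out = subprocess.check_output(
            [RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
        stats = json.loads(out)
        # Populated only when bdev_nvme_set_options ran with --nvme-error-stat.
        return stats["bdevs"][0]["driver_specific"]["nvme_error"][
            "status_code"]["command_transient_transport_error"]

    # The digest test passes when at least one injected error was counted,
    # e.g. the 138 observed in this run:
    # assert get_transient_errcount("nvme0n1") > 0
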
00:17:35.984 [2024-07-12 06:42:15.692163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83625 ] 00:17:35.984 [2024-07-12 06:42:15.830064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.984 [2024-07-12 06:42:15.862177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.920 06:42:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.920 06:42:16 -- common/autotest_common.sh@852 -- # return 0 00:17:36.920 06:42:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:36.920 06:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:36.920 06:42:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:36.920 06:42:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.920 06:42:16 -- common/autotest_common.sh@10 -- # set +x 00:17:37.178 06:42:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.178 06:42:16 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.178 06:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.178 nvme0n1 00:17:37.437 06:42:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:37.437 06:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:37.437 06:42:17 -- common/autotest_common.sh@10 -- # set +x 00:17:37.437 06:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.437 06:42:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:37.437 06:42:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:37.437 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:37.437 Zero copy mechanism will not be used. 00:17:37.437 Running I/O for 2 seconds... 
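(The trace above is the setup for the second error-injection run — 128 KiB random writes at queue depth 16, per the bdevperf flags -w randwrite -o 131072 -q 16: error statistics and unlimited retries are enabled with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1; crc32c injection is disabled while the controller attaches with data digest on (--ddgst), so the connection itself comes up clean; then accel_error_inject_error -o crc32c -t corrupt -i 32 corrupts every 32nd crc32c operation before perform_tests starts the I/O. Consistent with that interval, the completions below advance sqhd by 0x20, i.e. 32 commands, between consecutive errors. Here is a sketch of the same RPC sequence driven from Python; the flags are copied from the trace, but note that the trace issues the accel_error_inject_error calls through rpc_cmd, which I assume targets the default application socket rather than the bperf one.)

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    BPERF_SOCK = "/var/tmp/bperf.sock"

    def rpc(sock: str | None, *args: str) -> None:
        # rpc.py with -s hits a specific app socket; without it, the default.
        cmd = [RPC] + (["-s", sock] if sock else []) + list(args)
        subprocess.check_call(cmd)

    # Count every NVMe status code and retry failed I/O indefinitely.
    rpc(BPERF_SOCK, "bdev_nvme_set_options",
        "--nvme-error-stat", "--bdev-retry-count", "-1")
    # Keep crc32c honest while the controller connects with data digest on
    # (assumption: the inject RPCs go to the default socket, per rpc_cmd).
    rpc(None, "accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    rpc(BPERF_SOCK, "bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # Now corrupt every 32nd crc32c result so a fraction of the WRITEs
    # carry a bad data digest once bdevperf starts issuing I/O.
    rpc(None, "accel_error_inject_error", "-o", "crc32c",
        "-t", "corrupt", "-i", "32")
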
00:17:37.437 [2024-07-12 06:42:17.247412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:37.437 [2024-07-12 06:42:17.247777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:37.437 [2024-07-12 06:42:17.247806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:37.437 [2024-07-12 06:42:17.252658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:37.437 [2024-07-12 06:42:17.252970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:37.437 [2024-07-12 06:42:17.253011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern (data_crc32_calc_done *ERROR*, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further len:32 WRITEs on qid:1 cid:15, lba from 320 to 24704 (last: 4448), sqhd stepping by 0x20 and wrapping (0061/0001/0021/0041), through 06:42:17.374 ...]
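
The *ERROR* records come from data_crc32_calc_done() in SPDK's NVMe/TCP transport: the CRC32C data digest (DDGST) computed over a received data PDU's payload did not match the digest carried in the PDU, so each affected WRITE is completed back to the host with a transport error. As a minimal, self-contained sketch of the digest arithmetic alone (assuming nothing about SPDK internals beyond what the log shows; SPDK's actual path uses table-driven or hardware-accelerated CRC32C), the reflected CRC-32C/Castagnoli polynomial can be computed bit by bit:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78 -- the
 * digest NVMe/TCP uses for header (HDGST) and data (DDGST) digests.
 * Illustrative only; production code uses lookup tables or the
 * SSE4.2/ARMv8 CRC32C instructions. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            /* Fold one bit: XOR in the polynomial iff the low bit is set. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* "123456789" is the conventional CRC check vector; CRC-32C must
     * yield 0xE3069283 over it. */
    const uint8_t vec[9] = "123456789";
    uint32_t digest = crc32c(vec, sizeof(vec));

    printf("crc32c(\"123456789\") = 0x%08X\n", digest);
    return digest == 0xE3069283u ? 0 : 1;
}

Run standalone, the sketch prints the standard check value 0xE3069283. A receiver computes this kind of digest over each data PDU's payload and, when it differs from the PDU's DDGST field, reports a data digest error like the ones logged above.
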
00:17:37.699 [2024-07-12 06:42:17.379344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:37.699 [2024-07-12 06:42:17.379661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:37.699 [2024-07-12 06:42:17.379693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same pattern continues for further len:32 WRITEs on qid:1 cid:15, lba from 96 to 25568 (last: 1984), through 06:42:17.639 ...]
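
Each completion print encodes the status as (sct/sc): (00/22) is status code type 0x0 (generic command status) with status code 0x22, which SPDK renders as COMMAND TRANSIENT TRANSPORT ERROR; the trailing p/m/dnr fields are the phase tag, more, and do-not-retry bits (dnr:0 here, so the host is allowed to retry). A minimal sketch of unpacking such a 16-bit completion status word (bit layout per the NVMe completion queue entry, phase tag in bit 0; the struct and helper names are hypothetical, not SPDK API):

#include <stdint.h>
#include <stdio.h>

/* Decode a 16-bit NVMe completion status word into the fields the log
 * prints as (sct/sc) and p/m/dnr. Hypothetical helper, not an SPDK API. */
struct cpl_status {
    unsigned p;    /* phase tag                       */
    unsigned sc;   /* status code                     */
    unsigned sct;  /* status code type                */
    unsigned crd;  /* command retry delay (NVMe 1.4+) */
    unsigned m;    /* more information available      */
    unsigned dnr;  /* do not retry                    */
};

static struct cpl_status decode_status(uint16_t s)
{
    struct cpl_status st;
    st.p   = (s >> 0)  & 0x1;
    st.sc  = (s >> 1)  & 0xff;
    st.sct = (s >> 9)  & 0x7;
    st.crd = (s >> 12) & 0x3;
    st.m   = (s >> 14) & 0x1;
    st.dnr = (s >> 15) & 0x1;
    return st;
}

int main(void)
{
    /* Rebuild the status seen in the log: sct 0x0 (generic), sc 0x22,
     * printed above as COMMAND TRANSIENT TRANSPORT ERROR (00/22). */
    uint16_t raw = (uint16_t)((0x0 << 9) | (0x22 << 1));
    struct cpl_status st = decode_status(raw);

    printf("sct=%02x sc=%02x p=%u m=%u dnr=%u\n",
           st.sct, st.sc, st.p, st.m, st.dnr);
    return 0;
}

With m and dnr clear and sct/sc = 00/22, each failed WRITE is reported as a retryable transport-level error rather than a media or command-specific failure.
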
00:17:37.961 [2024-07-12 06:42:17.644199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:37.961 [2024-07-12 06:42:17.644535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:37.961 [2024-07-12 06:42:17.644566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same pattern continues for further len:32 WRITEs on qid:1 cid:15, lba from 160 to 24864 (last: 12640), through 06:42:17.877 ...]
00:17:37.963 [2024-07-12 06:42:17.882141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:38.224 [2024-07-12 06:42:17.882512] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.882546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.887189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.887531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.887581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.892277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.892603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.892643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.897249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.897577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.897623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.902022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.902362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.902393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.906811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.907151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.907184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.911480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.911804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.911849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.916073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.916404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.916438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.920769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.921108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.921139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.925389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.925732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.925766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.930089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.930406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.930452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.934818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.935214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.935246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.939570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.939897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.939931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.944189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.944512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.944544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.948802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 
[2024-07-12 06:42:17.949143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.949174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.953545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.953862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.953893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.958204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.958508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.958541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.963175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.963545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.963579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.968291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.968697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.968756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.973755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.974080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.974161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.979129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.979474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.224 [2024-07-12 06:42:17.979510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.224 [2024-07-12 06:42:17.984221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with 
pdu=0x2000190fef90 00:17:38.224 [2024-07-12 06:42:17.984589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:17.984625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:17.989271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:17.989638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:17.989688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:17.994234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:17.994577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:17.994611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:17.999391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:17.999724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:17.999757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.004190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.004503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.004531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.009258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.009599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.009634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.014522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.014883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.014918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.019738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.020102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.020136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.025113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.025494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.025532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.030401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.030742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.030776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.035505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.035865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.035900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.040619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.040953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.041010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.045599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.045943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.045986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.050721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.051094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.051128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.055496] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.055820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.055852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.060266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.060619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.060658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.065088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.065410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.065455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.069755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.070100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.070132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.074743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.075072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.075105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.079530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.079860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.079905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.084425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.084781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.084815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:38.225 [2024-07-12 06:42:18.089381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.089707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.089739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.094165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.094494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.094531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.099223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.099548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.099580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.103966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.104300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.104332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.108643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.108972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.109013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.113590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.113928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.113972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.118405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.118753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.118789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.123246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.123604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.225 [2024-07-12 06:42:18.123653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.225 [2024-07-12 06:42:18.128081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.225 [2024-07-12 06:42:18.128397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.226 [2024-07-12 06:42:18.128428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.226 [2024-07-12 06:42:18.132648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.226 [2024-07-12 06:42:18.132970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.226 [2024-07-12 06:42:18.133010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.226 [2024-07-12 06:42:18.137306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.226 [2024-07-12 06:42:18.137646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.226 [2024-07-12 06:42:18.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.226 [2024-07-12 06:42:18.142335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.226 [2024-07-12 06:42:18.142717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.226 [2024-07-12 06:42:18.142743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.147640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.147982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.148029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.152987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.153369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.153437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.157796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.158109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.158155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.162471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.162816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.162863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.167162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.167488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.167519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.171769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.172078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.172125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.176494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.176818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.176850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.181073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.181382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.181421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.185683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.185997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.186040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.190439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.190793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.190828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.195155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.195481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.195512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.199840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.486 [2024-07-12 06:42:18.200177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.486 [2024-07-12 06:42:18.200208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.486 [2024-07-12 06:42:18.204526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.204840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.204872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.209219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.209561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.209593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.213791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.214132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.214164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.218467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.218803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 
[2024-07-12 06:42:18.218835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.223211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.223543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.223578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.227835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.228161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.228192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.232443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.232771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.232803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.237139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.237475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.237522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.241805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.242116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.242163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.246386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.246743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.251063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.251388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.251410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.255746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.256081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.256108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.260467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.260790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.260821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.265099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.265435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.265468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.269651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.269973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.270014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.274482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.274856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.274885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.279269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.279557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.279594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.283796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.284149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.284183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.288720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.289054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.289085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.293554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.293877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.293909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.298336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.298669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.298699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.302909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.303254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.303286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.307530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.307854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.307896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.312178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.312494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.312525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.316788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.317102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.317148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.321450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.321765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.321796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.326096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.326409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.326443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.330552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.330889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.330920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.335298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.335637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.487 [2024-07-12 06:42:18.335665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.487 [2024-07-12 06:42:18.340094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.487 [2024-07-12 06:42:18.340422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.340453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.344697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.345024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.345054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.349311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 
[2024-07-12 06:42:18.349637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.349668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.353874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.354205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.354238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.358387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.358728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.358760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.363008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.363337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.363370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.367567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.367894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.367925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.372226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.372556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.372590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.376831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.377149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.377197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.381424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with 
pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.381752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.381797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.386230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.386518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.386552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.390578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.390902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.390935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.395284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.395628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.395672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.399925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.400269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.400302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.488 [2024-07-12 06:42:18.404671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.488 [2024-07-12 06:42:18.405028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.488 [2024-07-12 06:42:18.405071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.410025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.410348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.410383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.415351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.415666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.415697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.419997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.420312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.420343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.424648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.424965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.425012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.429218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.429542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.429574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.433819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.434148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.434179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.438500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.438852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.438883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.443228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.443568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.443613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.447915] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.448262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.448295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.452489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.452812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.452843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.457186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.457488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.457523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.461693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.462028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.462058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.466203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.466525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.466556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.470817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.471145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.471176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.475554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.475877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.475914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
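The repeating trio above is one failed NVMe/TCP write: data_crc32_calc_done fires when the CRC-32C computed over a received PDU's DATA field does not match the DDGST digest carried in the PDU, and the command is then completed back to the host with a transport error. Below is a minimal, self-contained sketch of that digest check; it is not SPDK's implementation, and the payload bytes and the corrupted digest are invented for illustration (the identical failure recurring with rotating lba values suggests the test is injecting digest errors deliberately).

/* Sketch of the NVMe/TCP data-digest (DDGST) check: CRC-32C over the
 * PDU DATA field, compared against the digest received on the wire.
 * Hypothetical data; not SPDK code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise reflected CRC-32C (Castagnoli), polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *p, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t data[32];                  /* hypothetical 32-byte payload */
    memset(data, 0xA5, sizeof(data));

    uint32_t computed = crc32c(data, sizeof(data));
    uint32_t received = computed ^ 1u; /* simulate a corrupted DDGST  */

    if (received != computed)
        fprintf(stderr, "Data digest error: got 0x%08x, expected 0x%08x\n",
                received, computed);
    return 0;
}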
00:17:38.749 [2024-07-12 06:42:18.480302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.480637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.480669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.485119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.485461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.485496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.489747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.490083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.490132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.494373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.494700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.494731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.499072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.499417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.499451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.503559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.503887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.503931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.508168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.508482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.508506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.512702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.749 [2024-07-12 06:42:18.513025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.749 [2024-07-12 06:42:18.513068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.749 [2024-07-12 06:42:18.517408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.517771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.517806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.522409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.522764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.522799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.527648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.528012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.528073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.532870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.533240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.533292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.538166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.538527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.538561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.543434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.543798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.543841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.548727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.549047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.549104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.553789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.554109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.554156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.558782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.559138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.559170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.563875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.564211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.564274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.568656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.568982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.569022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.573331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.573662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.573693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.577891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.578234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.578268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.582441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.582777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.582808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.587140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.587487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.587520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.591714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.592037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.592102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.596433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.596755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.596786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.601185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.601518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.601551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.605754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.606089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.606136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.610393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.610726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 
[2024-07-12 06:42:18.610758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.614985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.615331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.615364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.619475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.619797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.619827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.624195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.624507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.624540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.628666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.628982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.629023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.633198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.633531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.633568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.637689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.638015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.638045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.642305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.642625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.642656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.646862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.647152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.647183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.651576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.651891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.750 [2024-07-12 06:42:18.651924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.750 [2024-07-12 06:42:18.656134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.750 [2024-07-12 06:42:18.656458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.751 [2024-07-12 06:42:18.656501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.751 [2024-07-12 06:42:18.660746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.751 [2024-07-12 06:42:18.661073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.751 [2024-07-12 06:42:18.661104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.751 [2024-07-12 06:42:18.665480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:38.751 [2024-07-12 06:42:18.665847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.751 [2024-07-12 06:42:18.665880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.670810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.671167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.671202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.675621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.676012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.676059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.680460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.680774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.680805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.685179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.685491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.685525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.689789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.690091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.690154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.694413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.694754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.694779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.699242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.699569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.699604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.703884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.011 [2024-07-12 06:42:18.704226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.011 [2024-07-12 06:42:18.704259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.011 [2024-07-12 06:42:18.708508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.708830] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.708860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.713135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.713441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.713474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.718290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.718688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.718723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.723416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.723737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.723769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.728035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.728336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.728373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.732697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.733013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.733055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.737309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.737635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.737666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.741970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.742304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.742338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.746365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.746698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.751166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.751468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.751510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.755840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.756195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.756229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.760497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.760822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.760844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.765129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.765483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.765515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.769740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.770080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.770126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.774328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 
06:42:18.774660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.774699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.778968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.779294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.779331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.783658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.783983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.784023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.788389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.788718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.788763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.792986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.793288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.793322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.797510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.797856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.797889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.802159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.802483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.802515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.806601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with 
pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.806901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.806932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.811268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.811583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.811613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.815798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.816120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.816166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.820398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.820722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.820756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.825014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.825325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.825356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.829607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.829950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.829995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.834253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.834609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.838759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.012 [2024-07-12 06:42:18.839123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.012 [2024-07-12 06:42:18.839173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.012 [2024-07-12 06:42:18.843463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.843776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.843807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.848146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.848470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.848501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.852621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.852944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.852984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.857240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.857585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.857622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.861862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.862202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.862247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.866362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.866694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.866718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.871149] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.871483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.871518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.875775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.876117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.876149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.880766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.881109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.881141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.885450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.885784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.885828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.890147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.890477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.890511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.894813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.895175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.895217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.899730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.900102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.900134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
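Each completion above decodes as (00/22): status code type 0x0 (generic command status) with status code 0x22, which spdk_nvme_print_completion names TRANSIENT TRANSPORT ERROR; p, m, and dnr are the phase, more, and do-not-retry bits of the same status word, while sqhd is the submission-queue head pointer carried in Dword 2 of the completion entry. The sketch below shows how those fields unpack from completion-queue-entry Dword 3; the Dword value is reconstructed so that it decodes to one of the log entries, not captured from the run.

/* Unpacking the NVMe CQE Dword 3 status fields printed above.
 * Illustrative only; field layout per the NVMe base spec. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Rebuilt so it decodes to: SCT=0x0, SC=0x22, cid:15, p:0, m:0, dnr:0 */
    uint32_t cdw3 = (0x22u << 17) | 15u;

    unsigned cid = cdw3 & 0xFFFFu;        /* command identifier        */
    unsigned p   = (cdw3 >> 16) & 0x1u;   /* phase tag                 */
    unsigned sc  = (cdw3 >> 17) & 0xFFu;  /* status code               */
    unsigned sct = (cdw3 >> 25) & 0x7u;   /* status code type          */
    unsigned m   = (cdw3 >> 30) & 0x1u;   /* more info in log page     */
    unsigned dnr = (cdw3 >> 31) & 0x1u;   /* do not retry              */

    /* Prints "(00/22) cid:15 p:0 m:0 dnr:0", matching the log lines. */
    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
    return 0;
}

With dnr:0 the host is permitted to retry the command, which fits the "transient" classification of the error.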
00:17:39.013 [2024-07-12 06:42:18.904864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.905225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.905273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.910029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.910373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.910410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.915317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.915665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.915700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.920444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.920775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.920810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.925443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.925774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.925808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.013 [2024-07-12 06:42:18.930756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.013 [2024-07-12 06:42:18.931168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.013 [2024-07-12 06:42:18.931202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.935848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.936216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.936248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.941070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.941394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.941440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.945775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.946122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.946155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.950849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.951224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.951261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.955663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.955993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.956034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.960425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.960755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.960801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.965422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.965775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.970228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.970569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.970604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.975107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.975448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.975496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.979919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.980264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.980299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.984630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.984952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.984993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.989536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.989883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.989919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.994232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.994584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.994628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:18.998902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:18.999259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:18.999297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.003736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.004102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.004135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.008454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.008786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.008831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.013114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.013444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.013490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.017920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.018262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.018297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.022567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.022924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.022967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.027618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.027970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.028015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.032800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.033171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.033207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.037907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.038281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 
[2024-07-12 06:42:19.038336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.043314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.043706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.043754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.048631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.048994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.049038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.053675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.054007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.054050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.058654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.058995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.274 [2024-07-12 06:42:19.059038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.274 [2024-07-12 06:42:19.063675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.274 [2024-07-12 06:42:19.064005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.064048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.068682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.069008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.069066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.073573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.073896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.073928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.078192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.078532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.078564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.082798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.083147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.083183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.087872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.088279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.088318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.093002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.093326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.093358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.097722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.098062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.098109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.102349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.102684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.102708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.106881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.107242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.107276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.111446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.111770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.111806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.116339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.116691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.116726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.121419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.121758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.121793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.126669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.126986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.127045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.131831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.132177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.132208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.136926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.137316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.137355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.141886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.142242] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.142295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.146808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.147130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.147161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.151917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.152275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.152309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.156608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.156937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.156978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.161248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.161570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.161607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.166064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.166395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.166429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.170669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.170981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.171021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.175491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.175854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.175890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.180230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.180574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.180608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.184837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.185184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.185230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.275 [2024-07-12 06:42:19.189935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.275 [2024-07-12 06:42:19.190350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.275 [2024-07-12 06:42:19.190387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.535 [2024-07-12 06:42:19.195234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.535 [2024-07-12 06:42:19.195607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.535 [2024-07-12 06:42:19.195647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.535 [2024-07-12 06:42:19.200452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.535 [2024-07-12 06:42:19.200833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.535 [2024-07-12 06:42:19.200873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.535 [2024-07-12 06:42:19.205374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.535 [2024-07-12 06:42:19.205701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.535 [2024-07-12 06:42:19.205734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.535 [2024-07-12 06:42:19.210068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90 00:17:39.535 
[2024-07-12 06:42:19.210403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.210437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:39.535 [2024-07-12 06:42:19.215091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:39.535 [2024-07-12 06:42:19.215445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.215477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:39.535 [2024-07-12 06:42:19.219686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:39.535 [2024-07-12 06:42:19.220016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.220060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:39.535 [2024-07-12 06:42:19.224440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:39.535 [2024-07-12 06:42:19.224788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.224823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:39.535 [2024-07-12 06:42:19.229199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:39.535 [2024-07-12 06:42:19.229578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.229624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:39.535 [2024-07-12 06:42:19.234017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:39.535 [2024-07-12 06:42:19.234348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.234383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:39.535 [2024-07-12 06:42:19.238705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78ddd0) with pdu=0x2000190fef90
00:17:39.535 [2024-07-12 06:42:19.238806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.535 [2024-07-12 06:42:19.238827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:39.535
00:17:39.535 Latency(us)
00:17:39.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.535 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:17:39.535 nvme0n1 : 2.00 6424.46 803.06 0.00 0.00 2484.80 2010.76 8579.26
00:17:39.535 ===================================================================================================================
00:17:39.535 Total : 6424.46 803.06 0.00 0.00 2484.80 2010.76 8579.26
00:17:39.535 0
00:17:39.535 06:42:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:39.535 06:42:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:39.535 06:42:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:39.535 | .driver_specific
00:17:39.535 | .nvme_error
00:17:39.535 | .status_code
00:17:39.535 | .command_transient_transport_error'
00:17:39.535 06:42:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:39.796 06:42:19 -- host/digest.sh@71 -- # (( 415 > 0 ))
00:17:39.796 06:42:19 -- host/digest.sh@73 -- # killprocess 83625
00:17:39.796 06:42:19 -- common/autotest_common.sh@926 -- # '[' -z 83625 ']'
00:17:39.796 06:42:19 -- common/autotest_common.sh@930 -- # kill -0 83625
00:17:39.796 06:42:19 -- common/autotest_common.sh@931 -- # uname
00:17:39.796 06:42:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:39.796 06:42:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83625
killing process with pid 83625
Received shutdown signal, test time was about 2.000000 seconds
00:17:39.796
00:17:39.796 Latency(us)
00:17:39.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.796 ===================================================================================================================
00:17:39.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:39.796 06:42:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:39.796 06:42:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:39.796 06:42:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83625'
00:17:39.796 06:42:19 -- common/autotest_common.sh@945 -- # kill 83625
00:17:39.796 06:42:19 -- common/autotest_common.sh@950 -- # wait 83625
00:17:39.796 06:42:19 -- host/digest.sh@115 -- # killprocess 83446
00:17:39.796 06:42:19 -- common/autotest_common.sh@926 -- # '[' -z 83446 ']'
00:17:39.796 06:42:19 -- common/autotest_common.sh@930 -- # kill -0 83446
00:17:39.796 06:42:19 -- common/autotest_common.sh@931 -- # uname
00:17:39.796 06:42:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:39.796 06:42:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83446
00:17:39.796 killing process with pid 83446
00:17:39.796 06:42:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:39.796 06:42:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:39.796 06:42:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83446'
00:17:39.796 06:42:19 -- common/autotest_common.sh@945 -- # kill 83446
00:17:39.796 06:42:19 -- common/autotest_common.sh@950 -- # wait 83446
00:17:40.055 ************************************
00:17:40.055 END TEST nvmf_digest_error
00:17:40.055 ************************************
00:17:40.055
00:17:40.055 real 0m15.746s
00:17:40.055 user 0m30.844s
00:17:40.055 sys 0m4.377s
00:17:40.055 06:42:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:40.055 06:42:19 -- common/autotest_common.sh@10 -- # set +x
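The long run of data_crc32_calc_done errors above is this test behaving as intended: every WRITE that hits a CRC32C data-digest mismatch on the TCP qpair is completed back to the host as TRANSIENT TRANSPORT ERROR (00/22), and the host bdev layer counts those completions per device. The (( 415 > 0 )) check in the trace is the test's assertion that at least one such error was recorded against nvme0n1; this run counted 415. A minimal standalone sketch of that check, assuming the same bperf RPC socket and bdev name used in this run:

# Count transient transport errors recorded against the bperf bdev
# (mirrors get_transient_errcount in host/digest.sh as traced above).
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # non-zero means the injected digest errors were observed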
00:17:40.055 06:42:19 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:17:40.055 06:42:19 -- host/digest.sh@139 -- # nvmftestfini
00:17:40.055 06:42:19 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:40.055 06:42:19 -- nvmf/common.sh@116 -- # sync
00:17:40.055 06:42:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:40.055 06:42:19 -- nvmf/common.sh@119 -- # set +e
00:17:40.055 06:42:19 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:40.055 06:42:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:40.055 rmmod nvme_tcp
00:17:40.055 rmmod nvme_fabrics
00:17:40.055 rmmod nvme_keyring
00:17:40.315 06:42:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:40.315 06:42:19 -- nvmf/common.sh@123 -- # set -e
00:17:40.315 06:42:19 -- nvmf/common.sh@124 -- # return 0
00:17:40.315 06:42:19 -- nvmf/common.sh@477 -- # '[' -n 83446 ']'
00:17:40.315 06:42:19 -- nvmf/common.sh@478 -- # killprocess 83446
00:17:40.315 06:42:19 -- common/autotest_common.sh@926 -- # '[' -z 83446 ']'
00:17:40.315 06:42:19 -- common/autotest_common.sh@930 -- # kill -0 83446
00:17:40.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (83446) - No such process
00:17:40.315 Process with pid 83446 is not found
00:17:40.315 06:42:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 83446 is not found'
00:17:40.315 06:42:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:40.315 06:42:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:40.315 06:42:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:40.315 06:42:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:40.315 06:42:19 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:40.315 06:42:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:40.315 06:42:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:40.315 06:42:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:40.315 06:42:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:17:40.315
00:17:40.315 real 0m32.426s
00:17:40.315 user 1m1.740s
00:17:40.315 sys 0m9.036s
00:17:40.315 06:42:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:40.315 ************************************
00:17:40.315 END TEST nvmf_digest
00:17:40.315 06:42:20 -- common/autotest_common.sh@10 -- # set +x
00:17:40.315 ************************************
00:17:40.315 06:42:20 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:17:40.315 06:42:20 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]]
00:17:40.315 06:42:20 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:17:40.315 06:42:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:17:40.315 06:42:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:40.315 06:42:20 -- common/autotest_common.sh@10 -- # set +x
00:17:40.315 ************************************
00:17:40.315 START TEST nvmf_multipath
00:17:40.315 ************************************
00:17:40.315 06:42:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:17:40.315 * Looking for test storage...
00:17:40.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.315 06:42:20 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.315 06:42:20 -- nvmf/common.sh@7 -- # uname -s 00:17:40.315 06:42:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.315 06:42:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.315 06:42:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.315 06:42:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.315 06:42:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.315 06:42:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.315 06:42:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.315 06:42:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.315 06:42:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.315 06:42:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.315 06:42:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:17:40.315 06:42:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:17:40.315 06:42:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.315 06:42:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.315 06:42:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.315 06:42:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.315 06:42:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.315 06:42:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.315 06:42:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.316 06:42:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.316 06:42:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.316 06:42:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.316 06:42:20 -- paths/export.sh@5 
-- # export PATH 00:17:40.316 06:42:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.316 06:42:20 -- nvmf/common.sh@46 -- # : 0 00:17:40.316 06:42:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.316 06:42:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.316 06:42:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.316 06:42:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.316 06:42:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.316 06:42:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.316 06:42:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.316 06:42:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.316 06:42:20 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.316 06:42:20 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.316 06:42:20 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.316 06:42:20 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:40.316 06:42:20 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.316 06:42:20 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:40.316 06:42:20 -- host/multipath.sh@30 -- # nvmftestinit 00:17:40.316 06:42:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.316 06:42:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.316 06:42:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.316 06:42:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.316 06:42:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.316 06:42:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.316 06:42:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.316 06:42:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.316 06:42:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:40.316 06:42:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:40.316 06:42:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:40.316 06:42:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:40.316 06:42:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:40.316 06:42:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:40.316 06:42:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.316 06:42:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.316 06:42:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.316 06:42:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:40.316 06:42:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.316 06:42:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.316 06:42:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.316 06:42:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.316 06:42:20 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.316 06:42:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.316 06:42:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.316 06:42:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.316 06:42:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:40.316 06:42:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:40.316 Cannot find device "nvmf_tgt_br" 00:17:40.316 06:42:20 -- nvmf/common.sh@154 -- # true 00:17:40.316 06:42:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.575 Cannot find device "nvmf_tgt_br2" 00:17:40.575 06:42:20 -- nvmf/common.sh@155 -- # true 00:17:40.575 06:42:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:40.575 06:42:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:40.575 Cannot find device "nvmf_tgt_br" 00:17:40.575 06:42:20 -- nvmf/common.sh@157 -- # true 00:17:40.575 06:42:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:40.575 Cannot find device "nvmf_tgt_br2" 00:17:40.575 06:42:20 -- nvmf/common.sh@158 -- # true 00:17:40.575 06:42:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:40.575 06:42:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:40.575 06:42:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.575 06:42:20 -- nvmf/common.sh@161 -- # true 00:17:40.575 06:42:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.575 06:42:20 -- nvmf/common.sh@162 -- # true 00:17:40.575 06:42:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.575 06:42:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.575 06:42:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.575 06:42:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.575 06:42:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.575 06:42:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.575 06:42:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.575 06:42:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:40.575 06:42:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:40.575 06:42:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:40.575 06:42:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:40.575 06:42:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:40.575 06:42:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:40.575 06:42:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.575 06:42:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.575 06:42:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.575 06:42:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:40.575 06:42:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:40.575 06:42:20 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.575 06:42:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.834 06:42:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.834 06:42:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.834 06:42:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.834 06:42:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:40.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:40.834 00:17:40.834 --- 10.0.0.2 ping statistics --- 00:17:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.834 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:40.834 06:42:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:40.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:40.834 00:17:40.834 --- 10.0.0.3 ping statistics --- 00:17:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.834 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:40.834 06:42:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:40.834 00:17:40.834 --- 10.0.0.1 ping statistics --- 00:17:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.834 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:40.834 06:42:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.834 06:42:20 -- nvmf/common.sh@421 -- # return 0 00:17:40.834 06:42:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:40.834 06:42:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.834 06:42:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:40.834 06:42:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:40.834 06:42:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.834 06:42:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:40.834 06:42:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:40.834 06:42:20 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:40.834 06:42:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:40.834 06:42:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:40.834 06:42:20 -- common/autotest_common.sh@10 -- # set +x 00:17:40.834 06:42:20 -- nvmf/common.sh@469 -- # nvmfpid=83890 00:17:40.834 06:42:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:40.834 06:42:20 -- nvmf/common.sh@470 -- # waitforlisten 83890 00:17:40.834 06:42:20 -- common/autotest_common.sh@819 -- # '[' -z 83890 ']' 00:17:40.834 06:42:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.834 06:42:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.834 06:42:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
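The ip and iptables calls above all come from nvmf_veth_init: the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace and reaches the host-side initiator through veth pairs slaved to the nvmf_br bridge, which is why the three pings across 10.0.0.1/2/3 are the last step before the target app is started. A condensed sketch of the topology with the same names as this run (the second target interface nvmf_tgt_if2 / 10.0.0.3 is built the same way and omitted here):

# Target side lives in its own network namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# One bridge stitches the host-side veth peers together.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target sanity check, as in the trace above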
00:17:40.834 06:42:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.834 06:42:20 -- common/autotest_common.sh@10 -- # set +x 00:17:40.834 [2024-07-12 06:42:20.610773] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:40.834 [2024-07-12 06:42:20.610845] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.834 [2024-07-12 06:42:20.754976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:41.093 [2024-07-12 06:42:20.800030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.093 [2024-07-12 06:42:20.800255] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.093 [2024-07-12 06:42:20.800282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.093 [2024-07-12 06:42:20.800292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.093 [2024-07-12 06:42:20.800389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.093 [2024-07-12 06:42:20.800405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.028 06:42:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.028 06:42:21 -- common/autotest_common.sh@852 -- # return 0 00:17:42.028 06:42:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.028 06:42:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:42.028 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:17:42.028 06:42:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.028 06:42:21 -- host/multipath.sh@33 -- # nvmfapp_pid=83890 00:17:42.028 06:42:21 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.028 [2024-07-12 06:42:21.929528] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.028 06:42:21 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:42.286 Malloc0 00:17:42.544 06:42:22 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:42.802 06:42:22 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.061 06:42:22 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.061 [2024-07-12 06:42:22.964267] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.319 06:42:22 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:43.319 [2024-07-12 06:42:23.212466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:43.319 06:42:23 -- host/multipath.sh@44 -- # bdevperf_pid=83946 00:17:43.319 06:42:23 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
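Strung together, the rpc.py calls traced above are the entire target-side setup for the multipath test: one TCP transport, one 64 MiB malloc bdev with 512-byte blocks, and one subsystem announced on two ports of the same address, so the initiator sees two paths to the same namespace. A condensed sketch using the same names as this run (the initiator-side bdev_nvme_attach_controller calls follow below):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path 2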
00:17:43.319 06:42:23 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.319 06:42:23 -- host/multipath.sh@47 -- # waitforlisten 83946 /var/tmp/bdevperf.sock 00:17:43.319 06:42:23 -- common/autotest_common.sh@819 -- # '[' -z 83946 ']' 00:17:43.319 06:42:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.319 06:42:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:43.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.319 06:42:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.319 06:42:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:43.319 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:17:44.692 06:42:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.692 06:42:24 -- common/autotest_common.sh@852 -- # return 0 00:17:44.692 06:42:24 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:44.692 06:42:24 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:44.950 Nvme0n1 00:17:44.950 06:42:24 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:45.516 Nvme0n1 00:17:45.516 06:42:25 -- host/multipath.sh@78 -- # sleep 1 00:17:45.516 06:42:25 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.477 06:42:26 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:46.477 06:42:26 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:46.735 06:42:26 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:46.993 06:42:26 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:46.993 06:42:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:46.993 06:42:26 -- host/multipath.sh@65 -- # dtrace_pid=83991 00:17:46.993 06:42:26 -- host/multipath.sh@66 -- # sleep 6 00:17:53.554 06:42:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:53.554 06:42:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:53.554 Attaching 4 probes... 
00:17:53.554 @path[10.0.0.2, 4421]: 16965 00:17:53.554 @path[10.0.0.2, 4421]: 17367 00:17:53.554 @path[10.0.0.2, 4421]: 17324 00:17:53.554 @path[10.0.0.2, 4421]: 17445 00:17:53.554 @path[10.0.0.2, 4421]: 17461 00:17:53.554 06:42:32 -- host/multipath.sh@67 -- # active_port=4421 00:17:53.554 06:42:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:53.554 06:42:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:53.554 06:42:32 -- host/multipath.sh@69 -- # sed -n 1p 00:17:53.554 06:42:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:53.554 06:42:32 -- host/multipath.sh@69 -- # port=4421 00:17:53.554 06:42:32 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:53.554 06:42:32 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:53.554 06:42:32 -- host/multipath.sh@72 -- # kill 83991 00:17:53.554 06:42:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:53.554 06:42:32 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:53.554 06:42:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:53.554 06:42:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:53.812 06:42:33 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:53.812 06:42:33 -- host/multipath.sh@65 -- # dtrace_pid=84110 00:17:53.812 06:42:33 -- host/multipath.sh@66 -- # sleep 6 00:17:53.812 06:42:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:00.370 06:42:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:00.370 06:42:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:00.370 06:42:39 -- host/multipath.sh@67 -- # active_port=4420 00:18:00.370 06:42:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.370 Attaching 4 probes... 
00:18:00.370 @path[10.0.0.2, 4420]: 17317 00:18:00.370 @path[10.0.0.2, 4420]: 17462 00:18:00.370 @path[10.0.0.2, 4420]: 17559 00:18:00.370 @path[10.0.0.2, 4420]: 17596 00:18:00.370 @path[10.0.0.2, 4420]: 17596 00:18:00.370 06:42:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:00.370 06:42:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:00.370 06:42:39 -- host/multipath.sh@69 -- # sed -n 1p 00:18:00.370 06:42:39 -- host/multipath.sh@69 -- # port=4420 00:18:00.370 06:42:39 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.370 06:42:39 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.370 06:42:39 -- host/multipath.sh@72 -- # kill 84110 00:18:00.370 06:42:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.370 06:42:39 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:00.370 06:42:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:00.370 06:42:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:00.628 06:42:40 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:00.628 06:42:40 -- host/multipath.sh@65 -- # dtrace_pid=84222 00:18:00.628 06:42:40 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:00.628 06:42:40 -- host/multipath.sh@66 -- # sleep 6 00:18:07.190 06:42:46 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:07.190 06:42:46 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:07.190 06:42:46 -- host/multipath.sh@67 -- # active_port=4421 00:18:07.191 06:42:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:07.191 Attaching 4 probes... 
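(For context, the jq filter used at @67 in each cycle presupposes nvmf_subsystem_get_listeners output of roughly the following shape. The field names come from the filter and the logged commands; the values are illustrative and unrelated fields are elided with "...":)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq .
# [
#   {
#     "address": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420", ... },
#     "ana_states": [ { "ana_state": "non_optimized", ... }, ... ]
#   },
#   ...
# ]
# so select (.ana_states[0].ana_state=="non_optimized") keeps the 4420 entry
# and .address.trsvcid extracts "4420".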
00:18:07.191 @path[10.0.0.2, 4421]: 14460 00:18:07.191 @path[10.0.0.2, 4421]: 17299 00:18:07.191 @path[10.0.0.2, 4421]: 17283 00:18:07.191 @path[10.0.0.2, 4421]: 17335 00:18:07.191 @path[10.0.0.2, 4421]: 17302 00:18:07.191 06:42:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:07.191 06:42:46 -- host/multipath.sh@69 -- # sed -n 1p 00:18:07.191 06:42:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:07.191 06:42:46 -- host/multipath.sh@69 -- # port=4421 00:18:07.191 06:42:46 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.191 06:42:46 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.191 06:42:46 -- host/multipath.sh@72 -- # kill 84222 00:18:07.191 06:42:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:07.191 06:42:46 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:07.191 06:42:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:07.191 06:42:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:07.450 06:42:47 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:07.450 06:42:47 -- host/multipath.sh@65 -- # dtrace_pid=84340 00:18:07.450 06:42:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:07.450 06:42:47 -- host/multipath.sh@66 -- # sleep 6 00:18:14.015 06:42:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:14.015 06:42:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:14.015 06:42:53 -- host/multipath.sh@67 -- # active_port= 00:18:14.015 06:42:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.015 Attaching 4 probes... 
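(A note on the odd-looking guards: inside [[ ]], a quoted right-hand side of == is matched literally rather than as a glob, and bash's xtrace marks that by escaping every character, which is why the log shows [[ 4421 == \4\4\2\1 ]]. Unescaped, the two checks at @70/@71 amount to the following; the variable names are assumptions:)
[[ $port == "$active_port" ]]     # port seen carrying I/O matches the ANA-reported port
[[ $port == "$expected_port" ]]   # and matches the port this phase of the test expects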
00:18:14.015 00:18:14.015 00:18:14.015 00:18:14.015 00:18:14.015 00:18:14.015 06:42:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:14.015 06:42:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:14.015 06:42:53 -- host/multipath.sh@69 -- # sed -n 1p 00:18:14.015 06:42:53 -- host/multipath.sh@69 -- # port= 00:18:14.015 06:42:53 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:14.015 06:42:53 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:14.015 06:42:53 -- host/multipath.sh@72 -- # kill 84340 00:18:14.015 06:42:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.015 06:42:53 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:14.015 06:42:53 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:14.015 06:42:53 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:14.274 06:42:53 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:14.274 06:42:53 -- host/multipath.sh@65 -- # dtrace_pid=84458 00:18:14.274 06:42:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:14.274 06:42:53 -- host/multipath.sh@66 -- # sleep 6 00:18:20.835 06:42:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:20.835 06:42:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:20.835 06:43:00 -- host/multipath.sh@67 -- # active_port=4421 00:18:20.835 06:43:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:20.835 Attaching 4 probes... 
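(The '' '' invocation above is the negative case: with both listeners inaccessible no I/O flows, nvmf_path.bt emits no @path counters, and the jq select matches no listener, so both sides of the guards collapse to empty strings and [[ '' == '' ]] passes. Sketched, with the shortened path assumed:)
port=$(cat trace.txt | cut -d ']' -f1 | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)  # -> ""
[[ $port == '' ]]   # no path carried I/O, as required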
00:18:20.835 @path[10.0.0.2, 4421]: 16634
00:18:20.835 @path[10.0.0.2, 4421]: 17068
00:18:20.835 @path[10.0.0.2, 4421]: 17021
00:18:20.835 @path[10.0.0.2, 4421]: 16926
00:18:20.835 @path[10.0.0.2, 4421]: 17079
00:18:20.835 06:43:00 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:20.835 06:43:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:20.835 06:43:00 -- host/multipath.sh@69 -- # sed -n 1p
00:18:20.835 06:43:00 -- host/multipath.sh@69 -- # port=4421
00:18:20.835 06:43:00 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:18:20.835 06:43:00 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:18:20.835 06:43:00 -- host/multipath.sh@72 -- # kill 84458
00:18:20.835 06:43:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:20.835 06:43:00 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-12 06:43:00.498266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set
[the identical tcp.c:1574 *ERROR* record repeated 36 more times for tqpair=0x713ee0, timestamps 06:43:00.498321 through 06:43:00.498664]
[2024-07-12 06:43:00.498673]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 [2024-07-12 06:43:00.498732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713ee0 is same with the state(5) to be set 00:18:20.836 06:43:00 -- host/multipath.sh@101 -- # sleep 1 00:18:21.772 06:43:01 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:21.772 06:43:01 -- host/multipath.sh@65 -- # dtrace_pid=84576 00:18:21.772 06:43:01 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:21.772 06:43:01 -- host/multipath.sh@66 -- # sleep 6 00:18:28.355 06:43:07 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.355 06:43:07 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:28.355 06:43:07 -- host/multipath.sh@67 -- # active_port=4420 00:18:28.355 06:43:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.355 Attaching 4 probes... 
00:18:28.355 @path[10.0.0.2, 4420]: 19421 00:18:28.355 @path[10.0.0.2, 4420]: 17521 00:18:28.355 @path[10.0.0.2, 4420]: 16908 00:18:28.355 @path[10.0.0.2, 4420]: 16927 00:18:28.355 @path[10.0.0.2, 4420]: 16969 00:18:28.355 06:43:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.355 06:43:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:28.355 06:43:07 -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.355 06:43:07 -- host/multipath.sh@69 -- # port=4420 00:18:28.355 06:43:07 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.355 06:43:07 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.355 06:43:07 -- host/multipath.sh@72 -- # kill 84576 00:18:28.355 06:43:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.355 06:43:07 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.355 [2024-07-12 06:43:08.033792] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.355 06:43:08 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:28.614 06:43:08 -- host/multipath.sh@111 -- # sleep 6 00:18:35.179 06:43:14 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:35.179 06:43:14 -- host/multipath.sh@65 -- # dtrace_pid=84756 00:18:35.179 06:43:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.179 06:43:14 -- host/multipath.sh@66 -- # sleep 6 00:18:40.446 06:43:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:40.446 06:43:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:40.704 06:43:20 -- host/multipath.sh@67 -- # active_port=4421 00:18:40.704 06:43:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:40.704 Attaching 4 probes... 
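(Taken together, @100 through @112 are the actual path-failure exercise: the optimized 4421 listener is torn down, I/O is shown to fail over to the non_optimized 4420 path, and the listener is then re-added and promoted so I/O moves back. Condensed below, with rpc.py standing in for the full logged script path and the rationale for the sleeps assumed:)
# fail the optimized path
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 1                               # give the host a moment to notice the drop
confirm_io_on_port non_optimized 4420 # I/O must have failed over to 4420
# restore and promote it
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 6                               # allow the ANA change notification to propagate
confirm_io_on_port optimized 4421     # I/O must move back to 4421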
00:18:40.704 @path[10.0.0.2, 4421]: 16195 00:18:40.704 @path[10.0.0.2, 4421]: 16640 00:18:40.704 @path[10.0.0.2, 4421]: 16787 00:18:40.704 @path[10.0.0.2, 4421]: 16631 00:18:40.704 @path[10.0.0.2, 4421]: 16800 00:18:40.704 06:43:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:40.704 06:43:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:40.704 06:43:20 -- host/multipath.sh@69 -- # sed -n 1p 00:18:40.704 06:43:20 -- host/multipath.sh@69 -- # port=4421 00:18:40.704 06:43:20 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:40.704 06:43:20 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:40.704 06:43:20 -- host/multipath.sh@72 -- # kill 84756 00:18:40.704 06:43:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:40.971 06:43:20 -- host/multipath.sh@114 -- # killprocess 83946 00:18:40.971 06:43:20 -- common/autotest_common.sh@926 -- # '[' -z 83946 ']' 00:18:40.971 06:43:20 -- common/autotest_common.sh@930 -- # kill -0 83946 00:18:40.971 06:43:20 -- common/autotest_common.sh@931 -- # uname 00:18:40.971 06:43:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:40.971 06:43:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83946 00:18:40.971 06:43:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:40.971 killing process with pid 83946 00:18:40.971 06:43:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:40.971 06:43:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83946' 00:18:40.971 06:43:20 -- common/autotest_common.sh@945 -- # kill 83946 00:18:40.971 06:43:20 -- common/autotest_common.sh@950 -- # wait 83946 00:18:40.971 Connection closed with partial response: 00:18:40.971 00:18:40.971 00:18:40.971 06:43:20 -- host/multipath.sh@116 -- # wait 83946 00:18:40.971 06:43:20 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:40.971 [2024-07-12 06:42:23.275381] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:40.971 [2024-07-12 06:42:23.275500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83946 ] 00:18:40.971 [2024-07-12 06:42:23.412491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.971 [2024-07-12 06:42:23.451456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.971 Running I/O for 90 seconds... 
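(The killprocess sequence traced above from autotest_common.sh follows a recognizable pattern; a sketch with the trace's line tags as comments. The body of the sudo branch is not visible in this run and is left elided:)
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                            # @926: require a pid
    kill -0 "$pid"                                       # @930: fail if the process is already gone
    if [ "$(uname)" = Linux ]; then                      # @931
        process_name=$(ps --no-headers -o comm= "$pid")  # @932: "reactor_2" in this run
    fi
    if [ "$process_name" = sudo ]; then                  # @936: a sudo wrapper needs special handling (elided)
        :
    fi
    echo "killing process with pid $pid"                 # @944
    kill "$pid"                                          # @945
    wait "$pid"                                          # @950
}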
00:18:40.971 [2024-07-12 06:42:33.517695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:40.971 [2024-07-12 06:42:33.517773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
[per-I/O trace condensed: roughly a hundred further nvme_qpair.c command/completion pairs of the same form at 06:42:33 (WRITE/READ, qid:1, lbas 116768 through 118136), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02); the dump then resumes at 06:42:40]
00:18:40.974 [2024-07-12 06:42:40.080986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:40.974 [2024-07-12 06:42:40.081077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:18:40.974 [2024-07-12 06:42:40.081131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75904 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.974 [2024-07-12 06:42:40.081252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.974 [2024-07-12 06:42:40.081330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.974 [2024-07-12 06:42:40.081380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081620] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.974 [2024-07-12 06:42:40.081845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.974 [2024-07-12 06:42:40.081879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.974 [2024-07-12 06:42:40.081902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.974 [2024-07-12 06:42:40.081916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.081937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.081950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 
06:42:40.082185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.975 [2024-07-12 06:42:40.082199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.975 [2024-07-12 06:42:40.082234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.975 [2024-07-12 06:42:40.082463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.975 [2024-07-12 06:42:40.082593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 
cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.082953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.082988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.083007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.083044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:40.975 [2024-07-12 06:42:40.083080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.975 [2024-07-12 06:42:40.083094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083291] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.976 [2024-07-12 06:42:40.083340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.976 [2024-07-12 06:42:40.083374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.976 [2024-07-12 06:42:40.083486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.976 [2024-07-12 06:42:40.083520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 
06:42:40.083735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.976 [2024-07-12 06:42:40.083871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.976 [2024-07-12 06:42:40.083911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.976 [2024-07-12 06:42:40.083946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:40.976 [2024-07-12 06:42:40.083967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.083980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76184 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.084549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:75 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.084971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.084992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b 
p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.977 [2024-07-12 06:42:40.085934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.085955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.977 [2024-07-12 06:42:40.085972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:40.977 [2024-07-12 06:42:40.086010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.086024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.086046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.086060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.086082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.086097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.086119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.086143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.087617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.087682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.087728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.087787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.087831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.087877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.087921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.087966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.087995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.088131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.088223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.088323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.088457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:40.088546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:40.088576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:40.088590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:47.180215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.978 [2024-07-12 06:42:47.180414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:47.180470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:47.180491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:47.180530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:47.180545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:47.180567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:47.180582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:47.180603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.978 [2024-07-12 06:42:47.180617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:40.978 [2024-07-12 06:42:47.180658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.978 [2024-07-12 06:42:47.180674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: between 06:42:47.180 and 06:42:47.187, every READ and WRITE on qid:1 (lba 114832-116040) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:18:40.981 [2024-07-12 06:43:00.498795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.981 [2024-07-12 06:43:00.498840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated *NOTICE* pairs elided: from 06:43:00.498 onward, every READ and WRITE on qid:1 (lba 30376-31552) completes with ABORTED - SQ DELETION (00/08) ...]
00:18:40.983 [2024-07-12 06:43:00.501796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.501809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.501824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.501844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.501875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.501888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.501902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.501915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.501930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.501943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.501957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.501994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.502011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.502025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.502040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.983 [2024-07-12 06:43:00.502054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.502068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.983 [2024-07-12 06:43:00.502082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.983 [2024-07-12 06:43:00.502097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.984 [2024-07-12 06:43:00.502551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.984 [2024-07-12 06:43:00.502744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ea3d0 is same with the state(5) to be set 00:18:40.984 
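Every completion in the run collapsed above carries the same status word. In these SPDK prints the parenthesized pair is (SCT/SC), so per the NVMe base specification the fields decode as:

    (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
     SCT 0x00 -> Generic Command Status
     SC  0x08 -> Command Aborted due to SQ Deletion
     sqhd     -> submission queue head pointer echoed back in the completion
     p, m     -> phase tag and "more" bit
     dnr:0    -> Do Not Retry is clear, i.e. the command remains retryable

dnr:0 is consistent with what follows: bdev_nvme can requeue these I/Os across the controller reset below instead of failing them up the stack.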
[2024-07-12 06:43:00.502775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:40.984 [2024-07-12 06:43:00.502785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:40.984 [2024-07-12 06:43:00.502799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31104 len:8 PRP1 0x0 PRP2 0x0 00:18:40.984 [2024-07-12 06:43:00.502812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.984 [2024-07-12 06:43:00.502859] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ea3d0 was disconnected and freed. reset controller. 00:18:40.984 [2024-07-12 06:43:00.503948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.984 [2024-07-12 06:43:00.504047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89d920 (9): Bad file descriptor 00:18:40.984 [2024-07-12 06:43:00.504415] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.984 [2024-07-12 06:43:00.504490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.984 [2024-07-12 06:43:00.504542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.984 [2024-07-12 06:43:00.504565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89d920 with addr=10.0.0.2, port=4421 00:18:40.984 [2024-07-12 06:43:00.504580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89d920 is same with the state(5) to be set 00:18:40.984 [2024-07-12 06:43:00.504614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89d920 (9): Bad file descriptor 00:18:40.984 [2024-07-12 06:43:00.504644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:40.984 [2024-07-12 06:43:00.504660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:40.984 [2024-07-12 06:43:00.504675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.984 [2024-07-12 06:43:00.504708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:40.984 [2024-07-12 06:43:00.504724] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.984 [2024-07-12 06:43:10.559870] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
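The block above is one complete failover cycle: qpair 0x8ea3d0 is disconnected and freed, bdev_nvme resets the controller, the reconnect to 10.0.0.2 port 4421 is refused (errno 111, ECONNREFUSED, reported by both the uring and posix sock layers), the controller is marked failed, and the retry loop reports a successful reset about ten seconds later. A plausible reconstruction of the target-side trigger, assuming multipath.sh flips the subsystem listener between its two ports with the standard rpc.py verbs that appear elsewhere in this log (the script step itself is not visible here):

    # hypothetical reconstruction of the failover trigger, not copied from multipath.sh
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the listener the initiator is currently connected to ...
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... then bring up the alternate port; until this lands, connect() gets ECONNREFUSED
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Once a listener is accepting on 4421, the reconnect poller succeeds and bdev_nvme logs "Resetting controller successful."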
00:18:40.984 Received shutdown signal, test time was about 55.369221 seconds
00:18:40.984
00:18:40.984                                                          Latency(us)
00:18:40.984 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average       min        max
00:18:40.984 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:40.984 Verification LBA range: start 0x0 length 0x4000
00:18:40.984 Nvme0n1                     :      55.37  9817.80    38.35     0.00     0.00   13015.89    294.17 7015926.69
00:18:40.984 ===================================================================================================================
00:18:40.984 Total                       :            9817.80    38.35     0.00     0.00   13015.89    294.17 7015926.69
00:18:40.984 06:43:20 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.243 06:43:21 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 06:43:21 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 06:43:21 -- host/multipath.sh@125 -- # nvmftestfini 06:43:21 -- nvmf/common.sh@476 -- # nvmfcleanup 06:43:21 -- nvmf/common.sh@116 -- # sync 06:43:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 06:43:21 -- nvmf/common.sh@119 -- # set +e 06:43:21 -- nvmf/common.sh@120 -- # for i in {1..20} 06:43:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:41.243 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 00:18:41.502 06:43:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 06:43:21 -- nvmf/common.sh@123 -- # set -e 06:43:21 -- nvmf/common.sh@124 -- # return 0 06:43:21 -- nvmf/common.sh@477 -- # '[' -n 83890 ']' 06:43:21 -- nvmf/common.sh@478 -- # killprocess 83890 06:43:21 -- common/autotest_common.sh@926 -- # '[' -z 83890 ']' 06:43:21 -- common/autotest_common.sh@930 -- # kill -0 83890 06:43:21 -- common/autotest_common.sh@931 -- # uname 06:43:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 06:43:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83890 killing process with pid 83890 06:43:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 06:43:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 06:43:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83890' 06:43:21 -- common/autotest_common.sh@945 -- # kill 83890 06:43:21 -- common/autotest_common.sh@950 -- # wait 83890 06:43:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 06:43:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 06:43:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 06:43:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 06:43:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 06:43:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 06:43:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 06:43:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 06:43:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:41.502 00:18:41.502 real 1m1.331s user 2m49.898s sys 0m18.529s 06:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.502 06:43:21
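The summary numbers above are internally consistent, which is a quick sanity check worth running on any bdevperf result:

    38.35 MiB/s ~= 9817.80 IOPS x 4096 B = 40,213,708.8 B/s / 2^20
    IOPS        ~= qdepth / avg latency = 128 / 13,015.89 us ~= 9834  (measured: 9817.80, within 0.2%)

The second line is Little's law and only holds approximately, since the queue drains during the controller resets; those resets are also what the 7,015,926.69 us (about 7 s) max latency is measuring.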
-- common/autotest_common.sh@10 -- # set +x 00:18:41.502 ************************************ 00:18:41.502 END TEST nvmf_multipath 00:18:41.502 ************************************ 00:18:41.771 06:43:21 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:41.771 06:43:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:41.771 06:43:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:41.771 06:43:21 -- common/autotest_common.sh@10 -- # set +x 00:18:41.771 ************************************ 00:18:41.771 START TEST nvmf_timeout 00:18:41.771 ************************************ 00:18:41.771 06:43:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:41.771 * Looking for test storage... 00:18:41.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:41.771 06:43:21 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.771 06:43:21 -- nvmf/common.sh@7 -- # uname -s 00:18:41.771 06:43:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.771 06:43:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.771 06:43:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.771 06:43:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.771 06:43:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.771 06:43:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.771 06:43:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.771 06:43:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.771 06:43:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.771 06:43:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.771 06:43:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:18:41.771 06:43:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:18:41.771 06:43:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.771 06:43:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.771 06:43:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.771 06:43:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.771 06:43:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.771 06:43:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.771 06:43:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.771 06:43:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.771 06:43:21 -- paths/export.sh@3 -- # 
PATH=[... elided: near-duplicate of the paths/export.sh@2 value above, with /opt/go/1.21.1/bin prepended once more ...] 00:18:41.771 06:43:21 -- paths/export.sh@4 -- # PATH=[... elided: the same value again, now prefixed with /opt/protoc/21.7/bin ...] 00:18:41.771 06:43:21 -- paths/export.sh@5 -- # export PATH 00:18:41.771 06:43:21 -- paths/export.sh@6 -- # echo [... exported PATH elided: identical to the paths/export.sh@4 value ...] 00:18:41.771 06:43:21 -- nvmf/common.sh@46 -- # : 0 06:43:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 06:43:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 06:43:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 06:43:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 06:43:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 06:43:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 06:43:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 06:43:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 06:43:21 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 06:43:21 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 06:43:21 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 06:43:21 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 06:43:21 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 06:43:21 -- host/timeout.sh@19 -- # nvmftestinit 06:43:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 06:43:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 06:43:21 -- nvmf/common.sh@436 -- # prepare_net_devs 06:43:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 06:43:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 06:43:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 06:43:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 06:43:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 06:43:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:18:41.771 06:43:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:41.771 06:43:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:41.771 06:43:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:41.771 06:43:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:41.771 06:43:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:41.771 06:43:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.771 06:43:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.771 06:43:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.771 06:43:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:41.771 06:43:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.772 06:43:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.772 06:43:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.772 06:43:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.772 06:43:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.772 06:43:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.772 06:43:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.772 06:43:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.772 06:43:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:41.772 06:43:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:41.772 Cannot find device "nvmf_tgt_br" 00:18:41.772 06:43:21 -- nvmf/common.sh@154 -- # true 00:18:41.772 06:43:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.772 Cannot find device "nvmf_tgt_br2" 00:18:41.772 06:43:21 -- nvmf/common.sh@155 -- # true 00:18:41.772 06:43:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:41.772 06:43:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:41.772 Cannot find device "nvmf_tgt_br" 00:18:41.772 06:43:21 -- nvmf/common.sh@157 -- # true 00:18:41.772 06:43:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:41.772 Cannot find device "nvmf_tgt_br2" 00:18:41.772 06:43:21 -- nvmf/common.sh@158 -- # true 00:18:41.772 06:43:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:41.772 06:43:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:42.043 06:43:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.043 06:43:21 -- nvmf/common.sh@161 -- # true 00:18:42.043 06:43:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.043 06:43:21 -- nvmf/common.sh@162 -- # true 00:18:42.043 06:43:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.043 06:43:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.043 06:43:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.043 06:43:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.043 06:43:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.043 06:43:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.043 06:43:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:18:42.043 06:43:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:42.043 06:43:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:42.043 06:43:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:42.043 06:43:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:42.043 06:43:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:42.043 06:43:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:42.043 06:43:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.043 06:43:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.043 06:43:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.043 06:43:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:42.043 06:43:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:42.043 06:43:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.043 06:43:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.043 06:43:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.043 06:43:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.043 06:43:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.043 06:43:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:42.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:42.043 00:18:42.043 --- 10.0.0.2 ping statistics --- 00:18:42.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.043 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:42.043 06:43:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:42.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:42.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:42.043 00:18:42.043 --- 10.0.0.3 ping statistics --- 00:18:42.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.043 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:42.043 06:43:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:42.043 00:18:42.043 --- 10.0.0.1 ping statistics --- 00:18:42.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.043 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:42.043 06:43:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.043 06:43:21 -- nvmf/common.sh@421 -- # return 0 00:18:42.043 06:43:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:42.043 06:43:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.043 06:43:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:42.043 06:43:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:42.043 06:43:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.043 06:43:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:42.043 06:43:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:42.043 06:43:21 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:42.043 06:43:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:42.043 06:43:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:42.043 06:43:21 -- common/autotest_common.sh@10 -- # set +x 00:18:42.043 06:43:21 -- nvmf/common.sh@469 -- # nvmfpid=85063 00:18:42.043 06:43:21 -- nvmf/common.sh@470 -- # waitforlisten 85063 00:18:42.043 06:43:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:42.043 06:43:21 -- common/autotest_common.sh@819 -- # '[' -z 85063 ']' 00:18:42.043 06:43:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.043 06:43:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.043 06:43:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.043 06:43:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.043 06:43:21 -- common/autotest_common.sh@10 -- # set +x 00:18:42.043 [2024-07-12 06:43:21.961658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:42.043 [2024-07-12 06:43:21.962429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.302 [2024-07-12 06:43:22.103528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:42.302 [2024-07-12 06:43:22.144931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.302 [2024-07-12 06:43:22.145178] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.302 [2024-07-12 06:43:22.145193] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.302 [2024-07-12 06:43:22.145202] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
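At this point nvmf_veth_init has built the test topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the veth peers are joined over the nvmf_br bridge, which the three pings verify end to end. A condensed sketch of the commands above (the second target interface, stale-device cleanup, and the FORWARD rule are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # root ns -> target ns, over the bridge

The target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), so its listeners bind to 10.0.0.2/10.0.0.3 while the initiator connects from 10.0.0.1.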
00:18:42.302 [2024-07-12 06:43:22.145776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.302 [2024-07-12 06:43:22.145851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.253 06:43:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.253 06:43:22 -- common/autotest_common.sh@852 -- # return 0 00:18:43.253 06:43:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.253 06:43:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:43.253 06:43:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.253 06:43:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.253 06:43:23 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.253 06:43:23 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.512 [2024-07-12 06:43:23.273812] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.513 06:43:23 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:43.770 Malloc0 00:18:43.770 06:43:23 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.028 06:43:23 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.285 06:43:24 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.543 [2024-07-12 06:43:24.329061] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.543 06:43:24 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:44.543 06:43:24 -- host/timeout.sh@32 -- # bdevperf_pid=85118 00:18:44.543 06:43:24 -- host/timeout.sh@34 -- # waitforlisten 85118 /var/tmp/bdevperf.sock 00:18:44.543 06:43:24 -- common/autotest_common.sh@819 -- # '[' -z 85118 ']' 00:18:44.543 06:43:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.543 06:43:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:44.543 06:43:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.543 06:43:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:44.543 06:43:24 -- common/autotest_common.sh@10 -- # set +x 00:18:44.543 [2024-07-12 06:43:24.386486] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
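The target provisioning above reduces to five rpc.py calls (flags exactly as logged), after which the just-launched bdevperf process attaches from the initiator side:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u 8192 sets the IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 (next lines), which, roughly, retries a lost connection every 2 seconds and gives the controller up if it stays disconnected for 5; those two knobs are what this timeout test exercises.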
00:18:44.543 [2024-07-12 06:43:24.386570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85118 ] 00:18:44.800 [2024-07-12 06:43:24.525085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.800 [2024-07-12 06:43:24.567369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.735 06:43:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:45.735 06:43:25 -- common/autotest_common.sh@852 -- # return 0 00:18:45.735 06:43:25 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:45.735 06:43:25 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:46.303 NVMe0n1 00:18:46.303 06:43:25 -- host/timeout.sh@51 -- # rpc_pid=85137 00:18:46.303 06:43:25 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.303 06:43:25 -- host/timeout.sh@53 -- # sleep 1 00:18:46.303 Running I/O for 10 seconds... 00:18:47.239 06:43:26 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.501 [2024-07-12 06:43:27.233445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099ea0 is same with the state(5) to be set 00:18:47.501 [2024-07-12 06:43:27.233649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:47.501
[... long run of near-identical *NOTICE* pairs elided (2024-07-12 06:43:27.233681 - 06:43:27.234744): nvme_qpair.c 243:nvme_io_qpair_print_command prints each queued READ/WRITE command (sqid:1, nsid:1, len:8, LBAs in the 108384-109336 range) and 474:spdk_nvme_print_completion completes it as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the same SQ-deletion teardown pattern as after the earlier reset, this time triggered by removing the 10.0.0.2:4420 listener ...]
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.501 [2024-07-12 06:43:27.234755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.501 [2024-07-12 06:43:27.234764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.501 [2024-07-12 06:43:27.234776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.501 [2024-07-12 06:43:27.234785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.501 [2024-07-12 06:43:27.234796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.234826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.234846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.234867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.234887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.234908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.234928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.234948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.234978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.234989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.234998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 
[2024-07-12 06:43:27.235176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.235775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.235981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.235990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.236094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.502 [2024-07-12 06:43:27.236154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.502 [2024-07-12 06:43:27.236195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.502 [2024-07-12 06:43:27.236206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:18:47.502 [2024-07-12 06:43:27.236215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.502 [2024-07-12 06:43:27.236226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:47.502 [2024-07-12 06:43:27.236235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.502 [2024-07-12 06:43:27.236247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:47.502 [2024-07-12 06:43:27.236256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.502 [2024-07-12 06:43:27.236267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:47.502 [2024-07-12 06:43:27.236276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.502 [2024-07-12 06:43:27.236287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:47.502 [2024-07-12 06:43:27.236296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.502 [2024-07-12 06:43:27.236307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:47.502 [2024-07-12 06:43:27.236316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.503 [2024-07-12 06:43:27.236327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128f320 is same with the state(5) to be set
00:18:47.503 [2024-07-12 06:43:27.236339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:47.503 [2024-07-12 06:43:27.236347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:47.503 [2024-07-12 06:43:27.236357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109104 len:8 PRP1 0x0 PRP2 0x0
00:18:47.503 [2024-07-12 06:43:27.236366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.503 [2024-07-12 06:43:27.236410] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x128f320 was disconnected and freed. reset controller.
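The wall of paired *NOTICE* records above is SPDK draining I/O qpair 1 after the TCP connection to the target dropped: every command still queued on that submission queue is completed manually with ABORTED - SQ DELETION (status code type 00, status code 08), the qpair (tqpair=0x128f320) is freed, and bdev_nvme schedules a controller reset. A throwaway sketch for summarizing such a flood offline, assuming this console output has been saved to a file named console.log (the file name is an assumption, not part of the test):

    # Tally aborted READ vs WRITE submissions in a saved copy of this log;
    # "console.log" is an assumed file name, the pattern matches the records above.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log \
      | awk '{print $NF}' | sort | uniq -c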
00:18:47.503 [2024-07-12 06:43:27.236489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:47.503 [2024-07-12 06:43:27.236517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.503 [2024-07-12 06:43:27.236529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:47.503 [2024-07-12 06:43:27.236539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.503 [2024-07-12 06:43:27.236548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:47.503 [2024-07-12 06:43:27.236558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.503 [2024-07-12 06:43:27.236568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:47.503 [2024-07-12 06:43:27.236577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:47.503 [2024-07-12 06:43:27.236586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12943a0 is same with the state(5) to be set
00:18:47.503 [2024-07-12 06:43:27.236807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:47.503 [2024-07-12 06:43:27.236838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12943a0 (9): Bad file descriptor
00:18:47.503 [2024-07-12 06:43:27.236933] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:47.503 [2024-07-12 06:43:27.237019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:47.503 [2024-07-12 06:43:27.237063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:47.503 [2024-07-12 06:43:27.237080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12943a0 with addr=10.0.0.2, port=4420
00:18:47.503 [2024-07-12 06:43:27.237090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12943a0 is same with the state(5) to be set
00:18:47.503 [2024-07-12 06:43:27.237110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12943a0 (9): Bad file descriptor
00:18:47.503 [2024-07-12 06:43:27.237127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:47.503 [2024-07-12 06:43:27.237136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:47.503 [2024-07-12 06:43:27.249982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:47.503 [2024-07-12 06:43:27.250038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
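Each reconnect attempt in this phase dies inside connect() with errno = 111 at both the io_uring and POSIX socket layers before nvme_ctrlr gives up on the attempt. On Linux, errno 111 is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420, consistent with the subsystem listener having been removed earlier in this timeout test. To confirm the errno name on the build host (header path assumes the usual glibc layout):

    # errno 111 == ECONNREFUSED ("Connection refused") on Linux
    grep -w 111 /usr/include/asm-generic/errno.h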
00:18:47.503 [2024-07-12 06:43:27.250055] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:47.503 06:43:27 -- host/timeout.sh@56 -- # sleep 2
00:18:49.399 [2024-07-12 06:43:29.250172] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:49.400 [2024-07-12 06:43:29.250308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:49.400 [2024-07-12 06:43:29.250384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:49.400 [2024-07-12 06:43:29.250401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12943a0 with addr=10.0.0.2, port=4420
00:18:49.400 [2024-07-12 06:43:29.250415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12943a0 is same with the state(5) to be set
00:18:49.400 [2024-07-12 06:43:29.250442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12943a0 (9): Bad file descriptor
00:18:49.400 [2024-07-12 06:43:29.250462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:49.400 [2024-07-12 06:43:29.250472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:49.400 [2024-07-12 06:43:29.250482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:49.400 [2024-07-12 06:43:29.250509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:49.400 [2024-07-12 06:43:29.250520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:49.400 06:43:29 -- host/timeout.sh@57 -- # get_controller
00:18:49.400 06:43:29 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:49.400 06:43:29 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:49.657 06:43:29 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:18:49.657 06:43:29 -- host/timeout.sh@58 -- # get_bdev
00:18:49.657 06:43:29 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:49.657 06:43:29 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:49.915 06:43:29 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:18:49.915 06:43:29 -- host/timeout.sh@61 -- # sleep 5
00:18:51.815 [2024-07-12 06:43:31.250676] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:51.815 [2024-07-12 06:43:31.250763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:51.815 [2024-07-12 06:43:31.250807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:51.815 [2024-07-12 06:43:31.250824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12943a0 with addr=10.0.0.2, port=4420
00:18:51.815 [2024-07-12 06:43:31.250837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12943a0 is same with the state(5) to be set
00:18:51.815 [2024-07-12 06:43:31.250864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12943a0 (9): Bad file descriptor
00:18:51.815 [2024-07-12 06:43:31.250884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:51.815 [2024-07-12 06:43:31.250894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:51.815 [2024-07-12 06:43:31.250904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:51.815 [2024-07-12 06:43:31.250932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:51.815 [2024-07-12 06:43:31.250944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:53.716 [2024-07-12 06:43:33.250988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:53.716 [2024-07-12 06:43:33.251036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:53.716 [2024-07-12 06:43:33.251049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:53.716 [2024-07-12 06:43:33.251059] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:18:53.716 [2024-07-12 06:43:33.251093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:54.652
00:18:54.652 Latency(us)
00:18:54.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:54.652 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:54.652 Verification LBA range: start 0x0 length 0x4000
00:18:54.652 NVMe0n1 : 8.16 1665.72 6.51 15.69 0.00 76023.61 3604.48 7015926.69
00:18:54.652 ===================================================================================================================
00:18:54.652 Total : 1665.72 6.51 15.69 0.00 76023.61 3604.48 7015926.69
00:18:54.652 0
00:18:54.910 06:43:34 -- host/timeout.sh@62 -- # get_controller
00:18:54.910 06:43:34 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:54.910 06:43:34 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:55.477 06:43:35 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:55.477 06:43:35 -- host/timeout.sh@63 -- # get_bdev
00:18:55.477 06:43:35 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:55.477 06:43:35 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:55.477 06:43:35 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:55.477 06:43:35 -- host/timeout.sh@65 -- # wait 85137
00:18:55.477 06:43:35 -- host/timeout.sh@67 -- # killprocess 85118
00:18:55.477 06:43:35 -- common/autotest_common.sh@926 -- # '[' -z 85118 ']'
00:18:55.477 06:43:35 -- common/autotest_common.sh@930 -- # kill -0 85118
00:18:55.477 06:43:35 -- common/autotest_common.sh@931 -- # uname
00:18:55.477 06:43:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:55.477 06:43:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85118
00:18:55.477 06:43:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:18:55.477 06:43:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:18:55.477 06:43:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85118'
00:18:55.477 killing process with pid 85118
00:18:55.477 06:43:35 -- common/autotest_common.sh@945 -- # kill 85118
00:18:55.477 Received shutdown signal, test time was about 9.285579 seconds
00:18:55.477
00:18:55.477 Latency(us)
00:18:55.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.477 ===================================================================================================================
00:18:55.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:55.477 06:43:35 -- common/autotest_common.sh@950 -- # wait 85118
00:18:55.735 06:43:35 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:56.012 [2024-07-12 06:43:35.727534] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:56.012 06:43:35 -- host/timeout.sh@74 -- # bdevperf_pid=85267
00:18:56.012 06:43:35 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:56.012 06:43:35 -- host/timeout.sh@76 -- # waitforlisten 85267 /var/tmp/bdevperf.sock
00:18:56.012 06:43:35 -- common/autotest_common.sh@819 -- # '[' -z 85267 ']'
00:18:56.012 06:43:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:56.012 06:43:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:56.012 06:43:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:56.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:56.012 06:43:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:56.012 06:43:35 -- common/autotest_common.sh@10 -- # set +x
00:18:56.012 [2024-07-12 06:43:35.788519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:18:56.012 [2024-07-12 06:43:35.788591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85267 ]
00:18:56.277 [2024-07-12 06:43:35.922577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:56.277 [2024-07-12 06:43:35.957823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:56.844 06:43:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:18:56.844 06:43:36 -- common/autotest_common.sh@852 -- # return 0
00:18:56.844 06:43:36 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:18:57.103 06:43:36 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:18:57.362 NVMe0n1
00:18:57.362 06:43:37 -- host/timeout.sh@84 -- # rpc_pid=85290
00:18:57.362 06:43:37 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:57.362 06:43:37 -- host/timeout.sh@86 -- # sleep 1
00:18:57.620 Running I/O for 10 seconds...
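The bdevperf run that just started was wired up by the two RPCs a few lines above, and the three --*-timeout-sec flags on bdev_nvme_attach_controller are what the remainder of this test exercises: after a disconnect, the driver retries the connection every reconnect-delay-sec, starts failing queued I/O once the controller has been unreachable for fast-io-fail-timeout-sec, and drops the controller entirely after ctrlr-loss-timeout-sec. A minimal sketch of just that setup step, reusing the exact flags from the log and assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock (reading -r -1 as an uncapped retry count is an assumption):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # -r/--retry-count -1: do not cap per-I/O retries (assumed meaning of -1)
    $rpc_py bdev_nvme_set_options -r -1
    # Retry the TCP connection every 1 s, fail outstanding I/O once the
    # controller has been gone for 2 s, delete the controller after 5 s.
    $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
      --reconnect-delay-sec 1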
00:18:58.555 06:43:38 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:58.816 [2024-07-12 06:43:38.496314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099930 is same with the state(5) to be set
00:18:58.816 [2024-07-12 06:43:38.496606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:58.816 [2024-07-12 06:43:38.496637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:58.816 [2024-07-12 06:43:38.496660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:58.816 [2024-07-12 06:43:38.496671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:58.816 [2024-07-12 06:43:38.496683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:58.816 [2024-07-12
06:43:38.496693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:58.816 [2024-07-12 06:43:38.496705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:58.816 [2024-07-12 06:43:38.496714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats from 06:43:38.496726 through 06:43:38.499401 for every I/O still queued on qid:1 (READ and WRITE, len:8, lba 112960-114280); each command is completed as ABORTED - SQ DELETION (00/08) while the submission queue is torn down ...]
00:18:58.820 [2024-07-12 06:43:38.499411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a440 is same with the state(5) to be set
00:18:58.820 [2024-07-12 06:43:38.499423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:58.820 [2024-07-12 06:43:38.499430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:58.820 [2024-07-12 06:43:38.499440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113704 len:8 PRP1 0x0 PRP2 0x0
00:18:58.820 [2024-07-12 06:43:38.499450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:58.820 [2024-07-12 06:43:38.499493] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa1a440 was disconnected and freed. reset controller.
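The status printed as (00/08) decodes to status code type 0x0 (Generic Command Status) and status code 0x08 (Command Aborted due to SQ Deletion): every I/O still queued on qid:1 is failed back when the submission queue is deleted during the reset. When triaging a run like this, two shell one-liners are enough to size the abort storm; a minimal sketch, assuming the console output above has been saved to a hypothetical build.log:

    # Count the aborted completions in the saved log.
    grep -c 'ABORTED - SQ DELETION' build.log
    # Show the LBA span the aborted READ/WRITE commands covered.
    grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | awk 'NR==1{min=$1} {max=$1} END{print min, max}'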
00:18:58.820 [2024-07-12 06:43:38.499747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:58.820 [2024-07-12 06:43:38.499835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1f3a0 (9): Bad file descriptor
00:18:58.820 [2024-07-12 06:43:38.499939] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:58.820 [2024-07-12 06:43:38.500021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:58.820 [2024-07-12 06:43:38.500068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:58.820 [2024-07-12 06:43:38.500084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1f3a0 with addr=10.0.0.2, port=4420
00:18:58.820 [2024-07-12 06:43:38.500095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1f3a0 is same with the state(5) to be set
00:18:58.820 [2024-07-12 06:43:38.500115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1f3a0 (9): Bad file descriptor
00:18:58.820 [2024-07-12 06:43:38.500133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:58.820 [2024-07-12 06:43:38.500143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:58.820 [2024-07-12 06:43:38.500154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:58.820 [2024-07-12 06:43:38.500176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:58.820 [2024-07-12 06:43:38.500188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
06:43:38 -- host/timeout.sh@90 -- # sleep 1
00:18:59.756 [2024-07-12 06:43:39.500302] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:59.756 [2024-07-12 06:43:39.500398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:59.756 [2024-07-12 06:43:39.500444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:59.756 [2024-07-12 06:43:39.500461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1f3a0 with addr=10.0.0.2, port=4420
00:18:59.756 [2024-07-12 06:43:39.500474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1f3a0 is same with the state(5) to be set
00:18:59.756 [2024-07-12 06:43:39.500500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1f3a0 (9): Bad file descriptor
00:18:59.756 [2024-07-12 06:43:39.500521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:59.756 [2024-07-12 06:43:39.500531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:59.756 [2024-07-12 06:43:39.500543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:59.756 [2024-07-12 06:43:39.500571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
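errno = 111 is ECONNREFUSED on Linux: the target is no longer listening on 10.0.0.2:4420, so both the io_uring and POSIX socket backends have their connect() refused, the reconnect poll fails, and bdev_nvme schedules another reset attempt after the script's one-second sleep. A quick way to confirm the errno and list the retry cadence, again assuming the hypothetical saved build.log:

    # Decode errno 111 (prints 'ECONNREFUSED - Connection refused' on Linux).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # Timestamps of each refused reconnect attempt.
    grep 'connect() failed, errno = 111' build.log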
00:18:59.756 [2024-07-12 06:43:39.500584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
06:43:39 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:00.014 [2024-07-12 06:43:39.761234] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:00.014 06:43:39 -- host/timeout.sh@92 -- # wait 85290
00:19:00.948 [2024-07-12 06:43:40.517933] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:07.510
00:19:07.510                                                 Latency(us)
00:19:07.510 Device Information                     : runtime(s)     IOPS      MiB/s    Fail/s    TO/s     Average       min         max
00:19:07.510 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:07.510 Verification LBA range: start 0x0 length 0x4000
00:19:07.510     NVMe0n1                            :      10.01  8959.37    35.00     0.00     0.00    14260.09    930.91    3019898.88
00:19:07.510 ===================================================================================================================
00:19:07.510 Total                                  :             8959.37    35.00     0.00     0.00    14260.09    930.91    3019898.88
00:19:07.510 0
00:19:07.510 06:43:47 -- host/timeout.sh@97 -- # rpc_pid=85395
00:19:07.769 06:43:47 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:07.769 06:43:47 -- host/timeout.sh@98 -- # sleep 1
00:19:07.769 Running I/O for 10 seconds...
00:19:08.705 06:43:48 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:08.965 [2024-07-12 06:43:48.688985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10989a0 is same with the state(5) to be set
[... the same recv-state *ERROR* line for tqpair=0x10989a0 repeats with successive timestamps from 06:43:48.689043 through 06:43:48.689317 as the listener is torn down ...]
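The xtrace lines above give away the shape of this phase of the test: remove the TCP listener while bdevperf is running verify I/O, let the host's queued commands abort and its reconnects fail, then re-add the listener so the controller reset can succeed. A minimal sketch of that toggle, using only the rpc.py subcommands already visible in the trace (the full host/timeout.sh is not reproduced here, so treat this as an illustration rather than the script itself):

    #!/usr/bin/env bash
    # Toggle the NVMe/TCP listener to provoke queued-I/O aborts and a controller reset.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 1   # host reconnects are refused (errno 111) during this window
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420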
00:19:08.966 [2024-07-12 06:43:48.689374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.966 [2024-07-12 06:43:48.689406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... as in the first pass, the print_command/print_completion pair repeats from 06:43:48.689428 onward for every I/O still queued on qid:1 (READ and WRITE, len:8, lba 125536-126416); each is aborted with SQ DELETION (00/08) ...]
00:19:08.967 [2024-07-12 06:43:48.690261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1
lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125832 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.967 [2024-07-12 06:43:48.690626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 
[2024-07-12 06:43:48.690700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.967 [2024-07-12 06:43:48.690877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.967 [2024-07-12 06:43:48.690889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.690898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.690910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.690919] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.690930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.690940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.690951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.690970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.690982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.690992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691365] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.968 [2024-07-12 06:43:48.691787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:08.968 [2024-07-12 06:43:48.691798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.968 [2024-07-12 06:43:48.691808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.969 [2024-07-12 06:43:48.691829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.969 [2024-07-12 06:43:48.691850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.691871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.691892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.691912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.691934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.691967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.691980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.969 [2024-07-12 06:43:48.691990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 
[2024-07-12 06:43:48.692022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.969 [2024-07-12 06:43:48.692158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3f6e0 is same with the state(5) to be set 00:19:08.969 [2024-07-12 06:43:48.692181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:08.969 [2024-07-12 06:43:48.692189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:08.969 [2024-07-12 06:43:48.692197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126320 len:8 PRP1 0x0 PRP2 0x0 00:19:08.969 [2024-07-12 06:43:48.692207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692250] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa3f6e0 was disconnected and freed. reset controller. 
00:19:08.969 [2024-07-12 06:43:48.692324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.969 [2024-07-12 06:43:48.692341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.969 [2024-07-12 06:43:48.692361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.969 [2024-07-12 06:43:48.692388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.969 [2024-07-12 06:43:48.692407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.969 [2024-07-12 06:43:48.692416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1f3a0 is same with the state(5) to be set 00:19:08.969 [2024-07-12 06:43:48.692637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.969 [2024-07-12 06:43:48.692671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1f3a0 (9): Bad file descriptor 00:19:08.969 [2024-07-12 06:43:48.692768] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:08.969 [2024-07-12 06:43:48.692833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:08.969 [2024-07-12 06:43:48.692876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:08.969 [2024-07-12 06:43:48.692892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1f3a0 with addr=10.0.0.2, port=4420 00:19:08.969 [2024-07-12 06:43:48.692903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1f3a0 is same with the state(5) to be set 00:19:08.969 [2024-07-12 06:43:48.692922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1f3a0 (9): Bad file descriptor 00:19:08.969 [2024-07-12 06:43:48.692939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:08.969 [2024-07-12 06:43:48.692949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:08.969 [2024-07-12 06:43:48.692978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:08.969 [2024-07-12 06:43:48.693002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:08.969 [2024-07-12 06:43:48.693020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.969 06:43:48 -- host/timeout.sh@101 -- # sleep 3 00:19:09.905 [2024-07-12 06:43:49.693188 - 06:43:50.693987] [condensed: two further reconnect attempts, one second apart, fail identically: uring.c:641/posix.c:1032 connect() to 10.0.0.2:4420 returns errno = 111, nvme_tcp.c reports 'Bad file descriptor' for tqpair=0xa1f3a0, nvme_ctrlr_process_init/spdk_nvme_ctrlr_reconnect_poll_async leave [nqn.2016-06.io.spdk:cnode1] in failed state, and bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete logs 'Resetting controller failed.']
00:19:10.840 [2024-07-12 06:43:50.694002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:11.787 [2024-07-12 06:43:51.696092] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.787 [2024-07-12 06:43:51.696266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.787 [2024-07-12 06:43:51.696312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.787 [2024-07-12 06:43:51.696330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1f3a0 with addr=10.0.0.2, port=4420 00:19:11.787 [2024-07-12 06:43:51.696343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1f3a0 is same with the state(5) to be set 00:19:11.787 [2024-07-12 06:43:51.696518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1f3a0 (9): Bad file descriptor 00:19:11.787 [2024-07-12 06:43:51.696687] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.787 [2024-07-12 06:43:51.696702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:11.787 [2024-07-12 06:43:51.696714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:11.787 [2024-07-12 06:43:51.699148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:11.787 [2024-07-12 06:43:51.699180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:12.045 06:43:51 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.045 [2024-07-12 06:43:51.957791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.304 06:43:51 -- host/timeout.sh@103 -- # wait 85395 00:19:12.870 [2024-07-12 06:43:52.734662] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
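Recovery is driven from the test script: host/timeout.sh@102 re-adds the TCP listener, and once the target is listening on 10.0.0.2:4420 again the pending reconnect succeeds and the controller reset completes. A sketch of issuing the same RPC from Python (the rpc.py path is copied from this log; adjust it for your checkout):

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path taken from this log

    # Re-add the listener that the test removed earlier; the arguments mirror
    # the host/timeout.sh@102 invocation captured above.
    subprocess.run(
        [RPC, "nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
         "-t", "tcp", "-a", "10.0.0.2", "-s", "4420"],
        check=True,
    )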
00:19:18.135 00:19:18.135 Latency(us) 00:19:18.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.135 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.135 Verification LBA range: start 0x0 length 0x4000 00:19:18.135 NVMe0n1 : 10.01 7263.50 28.37 5345.71 0.00 10132.13 525.03 3019898.88 00:19:18.135 =================================================================================================================== 00:19:18.135 Total : 7263.50 28.37 5345.71 0.00 10132.13 0.00 3019898.88 00:19:18.135 0 00:19:18.135 06:43:57 -- host/timeout.sh@105 -- # killprocess 85267 00:19:18.135 06:43:57 -- common/autotest_common.sh@926 -- # '[' -z 85267 ']' 00:19:18.135 06:43:57 -- common/autotest_common.sh@930 -- # kill -0 85267 00:19:18.135 06:43:57 -- common/autotest_common.sh@931 -- # uname 00:19:18.135 06:43:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:18.135 06:43:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85267 00:19:18.135 killing process with pid 85267 00:19:18.135 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.135 00:19:18.135 Latency(us) 00:19:18.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.135 =================================================================================================================== 00:19:18.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.135 06:43:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:18.135 06:43:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:18.135 06:43:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85267' 00:19:18.135 06:43:57 -- common/autotest_common.sh@945 -- # kill 85267 00:19:18.135 06:43:57 -- common/autotest_common.sh@950 -- # wait 85267 00:19:18.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.135 06:43:57 -- host/timeout.sh@110 -- # bdevperf_pid=85509 00:19:18.135 06:43:57 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:18.135 06:43:57 -- host/timeout.sh@112 -- # waitforlisten 85509 /var/tmp/bdevperf.sock 00:19:18.135 06:43:57 -- common/autotest_common.sh@819 -- # '[' -z 85509 ']' 00:19:18.135 06:43:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.135 06:43:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:18.135 06:43:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.135 06:43:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:18.135 06:43:57 -- common/autotest_common.sh@10 -- # set +x 00:19:18.135 [2024-07-12 06:43:57.755283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
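The NVMe0n1 summary table above is internally consistent: the MiB/s column follows from the IOPS column times the 4096-byte I/O size this run uses, and the large Fail/s and max-latency figures presumably reflect the window during which the listener was removed. A quick check with the reported values:

    # Values copied from the NVMe0n1 summary row; -o 4096 sets the I/O size.
    iops = 7263.50
    io_size_bytes = 4096
    mib_per_sec = iops * io_size_bytes / (1024 * 1024)
    print(f"{mib_per_sec:.2f} MiB/s")  # prints 28.37 MiB/s, matching the table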
00:19:18.135 [2024-07-12 06:43:57.755371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85509 ] 00:19:18.135 [2024-07-12 06:43:57.895513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.135 [2024-07-12 06:43:57.930205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.069 06:43:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:19.069 06:43:58 -- common/autotest_common.sh@852 -- # return 0 00:19:19.069 06:43:58 -- host/timeout.sh@116 -- # dtrace_pid=85525 00:19:19.069 06:43:58 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:19.069 06:43:58 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:19.069 06:43:58 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:19.636 NVMe0n1 00:19:19.636 06:43:59 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.636 06:43:59 -- host/timeout.sh@124 -- # rpc_pid=85567 00:19:19.636 06:43:59 -- host/timeout.sh@125 -- # sleep 1 00:19:19.636 Running I/O for 10 seconds... 00:19:20.571 06:44:00 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.832 [2024-07-12 06:44:00.517206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.832 [2024-07-12 06:44:00.517262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.832 [2024-07-12 06:44:00.517274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.832 [2024-07-12 06:44:00.517283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.832 [2024-07-12 06:44:00.517292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [2024-07-12 06:44:00.517300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [2024-07-12 06:44:00.517308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [2024-07-12 06:44:00.517317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [2024-07-12 06:44:00.517325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [2024-07-12 06:44:00.517333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [2024-07-12 06:44:00.517341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.833 [condensed: the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1250620 repeats dozens more times between 06:44:00.517349 and 06:44:00.518055; the capture breaks off mid-entry at 06:44:00.518]
06:44:00.518064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same 
with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250620 is same with the state(5) to be set 00:19:20.834 [2024-07-12 06:44:00.518370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.834 [2024-07-12 06:44:00.518424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.834 [2024-07-12 06:44:00.518448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.834 [2024-07-12 06:44:00.518470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.834 [2024-07-12 06:44:00.518492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.834 [2024-07-12 06:44:00.518513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.834 [2024-07-12 06:44:00.518534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.834 [2024-07-12 06:44:00.518543] nvme_qpair.c: 
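[aside: the flood above was trimmed by hand. When triaging a run like this, stripping the per-record timestamps makes such repetition obvious; a minimal sketch in bash, assuming the console output was saved to a hypothetical build.log:]

    # group identical records and rank by count; the sed drops the Jenkins
    # elapsed-time prefix and the bracketed SPDK wall-clock timestamp
    sed -E 's/^[0-9:.]+ \[[0-9 :.-]+\] //' build.log | sort | uniq -c | sort -rn | head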
00:19:20.834 [2024-07-12 06:44:00.518370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:20.834 [2024-07-12 06:44:00.518400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 126 further READ / ABORTED - SQ DELETION record pairs trimmed, 06:44:00.518424 through 06:44:00.521170: the same pair repeats for every remaining queued read, cid:125 down through cid:0, with only the LBA varying ...]
00:19:20.837 [2024-07-12 06:44:00.521182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cd320 is same with the state(5) to be set
00:19:20.837 [2024-07-12 06:44:00.521195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:20.837 [2024-07-12 06:44:00.521205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:20.837 [2024-07-12 06:44:00.521216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112320 len:8 PRP1 0x0 PRP2 0x0
00:19:20.837 [2024-07-12 06:44:00.521226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:20.837 [2024-07-12 06:44:00.521271] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24cd320 was disconnected and freed. reset controller.
00:19:20.837 [2024-07-12 06:44:00.521421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:20.837 [2024-07-12 06:44:00.521441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further ASYNC EVENT REQUEST / ABORTED pairs trimmed, 06:44:00.521453 through 06:44:00.521502, for cid:1, cid:2 and cid:3 ...]
00:19:20.837 [2024-07-12 06:44:00.521511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d23a0 is same with the state(5) to be set
00:19:20.837 [2024-07-12 06:44:00.521774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:20.837 [2024-07-12 06:44:00.521807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d23a0 (9): Bad file descriptor
00:19:20.837 [2024-07-12 06:44:00.521914] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:20.837 [2024-07-12 06:44:00.522005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:20.837 [2024-07-12 06:44:00.522054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:20.837 [2024-07-12 06:44:00.522072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d23a0 with addr=10.0.0.2, port=4420
00:19:20.837 [2024-07-12 06:44:00.522083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d23a0 is same with the state(5) to be set
00:19:20.837 [2024-07-12 06:44:00.522103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d23a0 (9): Bad file descriptor
00:19:20.837 [2024-07-12 06:44:00.522120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:20.837 [2024-07-12 06:44:00.522130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
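[aside: errno = 111 is ECONNREFUSED on Linux. Both socket back ends (io_uring and POSIX) are reporting that nothing is listening at 10.0.0.2:4420 any longer, which this timeout test appears to provoke on purpose so it can watch the reconnect path. The same failure mode can be reproduced from bash against any closed port; the address below is simply the one from the log:]

    # bash-only /dev/tcp probe; the subshell keeps the failed exec contained
    (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null || echo 'connect() refused, as in the uring.c/posix.c records above'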
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:20.837 06:44:00 -- host/timeout.sh@128 -- # wait 85567 00:19:20.837 [2024-07-12 06:44:00.538110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:20.837 [2024-07-12 06:44:00.538177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:20.837 [2024-07-12 06:44:00.538198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.738 [2024-07-12 06:44:02.538377] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.738 [2024-07-12 06:44:02.538491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.738 [2024-07-12 06:44:02.538539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.738 [2024-07-12 06:44:02.538557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d23a0 with addr=10.0.0.2, port=4420 00:19:22.738 [2024-07-12 06:44:02.538570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d23a0 is same with the state(5) to be set 00:19:22.738 [2024-07-12 06:44:02.538598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d23a0 (9): Bad file descriptor 00:19:22.738 [2024-07-12 06:44:02.538619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:22.738 [2024-07-12 06:44:02.538630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:22.738 [2024-07-12 06:44:02.538652] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:22.738 [2024-07-12 06:44:02.538681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:22.738 [2024-07-12 06:44:02.538694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.636 [2024-07-12 06:44:04.538834] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.636 [2024-07-12 06:44:04.538931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.636 [2024-07-12 06:44:04.538993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.636 [2024-07-12 06:44:04.539012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d23a0 with addr=10.0.0.2, port=4420 00:19:24.636 [2024-07-12 06:44:04.539027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d23a0 is same with the state(5) to be set 00:19:24.636 [2024-07-12 06:44:04.539052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d23a0 (9): Bad file descriptor 00:19:24.636 [2024-07-12 06:44:04.539072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.636 [2024-07-12 06:44:04.539082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:24.636 [2024-07-12 06:44:04.539093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:24.636 [2024-07-12 06:44:04.539121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
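
Note on the abort dump above: spdk_nvme_print_completion prints NVMe status as (SCT/SC), so 'ABORTED - SQ DELETION (00/08)' is status code type 00h (generic) with status code 08h, which the NVMe base spec defines as Command Aborted due to SQ Deletion; every in-flight READ on qid:1 is completed that way when the qpair is torn down for the reset. A small ad-hoc decoder for that field (illustrative only, not part of the test scripts):

    # pull the (SCT/SC) pair out of an spdk_nvme_print_completion line
    echo 'ABORTED - SQ DELETION (00/08) qid:1 cid:0' |
        sed -E 's@.*\(([0-9a-f]{2})/([0-9a-f]{2})\).*@sct=0x\1 sc=0x\2@'
    # -> sct=0x00 sc=0x08 (generic / Command Aborted due to SQ Deletion)
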
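Each reconnect cycle after the qpair is freed fails identically: the uring and posix socket layers both report connect() errno = 111 (ECONNREFUSED) for 10.0.0.2:4420 because nothing is listening there, controller reinitialization fails, and bdev_nvme schedules the next attempt roughly 2 s later (06:44:00, :02, :04, :06). An illustrative way to confirm the refused port from the initiator side, outside the test:

    # errno 111 is ECONNREFUSED: no listener behind 10.0.0.2:4420
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'port 4420 refused, matching the errno = 111 lines above'
    fi
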
00:19:24.636 [2024-07-12 06:44:04.539134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:27.168 [2024-07-12 06:44:06.539190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:27.168 [2024-07-12 06:44:06.539235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.168 [2024-07-12 06:44:06.539247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:27.168 [2024-07-12 06:44:06.539258] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:27.168 [2024-07-12 06:44:06.539286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:27.802 00:19:27.802 Latency(us) 00:19:27.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.802 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:27.802 NVMe0n1 : 8.13 1921.01 7.50 15.74 0.00 65978.65 7923.90 7046430.72 00:19:27.802 =================================================================================================================== 00:19:27.802 Total : 1921.01 7.50 15.74 0.00 65978.65 7923.90 7046430.72 00:19:27.802 0 00:19:27.802 06:44:07 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.802 Attaching 5 probes... 00:19:27.802 1291.911991: reset bdev controller NVMe0 00:19:27.802 1291.994567: reconnect bdev controller NVMe0 00:19:27.802 3308.397062: reconnect delay bdev controller NVMe0 00:19:27.802 3308.416518: reconnect bdev controller NVMe0 00:19:27.802 5308.874608: reconnect delay bdev controller NVMe0 00:19:27.802 5308.894508: reconnect bdev controller NVMe0 00:19:27.802 7309.312118: reconnect delay bdev controller NVMe0 00:19:27.802 7309.331909: reconnect bdev controller NVMe0 00:19:27.802 06:44:07 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:27.802 06:44:07 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:27.802 06:44:07 -- host/timeout.sh@136 -- # kill 85525 00:19:27.802 06:44:07 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.802 06:44:07 -- host/timeout.sh@139 -- # killprocess 85509 00:19:27.802 06:44:07 -- common/autotest_common.sh@926 -- # '[' -z 85509 ']' 00:19:27.802 06:44:07 -- common/autotest_common.sh@930 -- # kill -0 85509 00:19:27.802 06:44:07 -- common/autotest_common.sh@931 -- # uname 00:19:27.802 06:44:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.802 06:44:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85509 00:19:27.802 killing process with pid 85509 00:19:27.802 Received shutdown signal, test time was about 8.184755 seconds 00:19:27.802 00:19:27.802 Latency(us) 00:19:27.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.802 =================================================================================================================== 00:19:27.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.802 06:44:07 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:27.802 06:44:07 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:27.802 06:44:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85509' 00:19:27.802 06:44:07 -- common/autotest_common.sh@945 -- # kill 85509 
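
host/timeout.sh@132 above greps the captured trace for delayed reconnects and tests the count against 2; with three 'reconnect delay' events recorded, (( 3 <= 2 )) evaluates false, which reads as the passing branch before trace.txt is removed. A minimal sketch of that verification (trace path and controller name taken from the log; the pass/fail polarity is inferred from this run):

    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    # repeated failures should have produced more than two delayed reconnects
    if (( delays <= 2 )); then
        echo "only $delays reconnect delays recorded" >&2
        exit 1
    fi
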
00:19:27.802 06:44:07 -- common/autotest_common.sh@950 -- # wait 85509 00:19:27.802 06:44:07 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.060 06:44:07 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:28.060 06:44:07 -- host/timeout.sh@145 -- # nvmftestfini 00:19:28.060 06:44:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:28.061 06:44:07 -- nvmf/common.sh@116 -- # sync 00:19:28.319 06:44:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:28.319 06:44:08 -- nvmf/common.sh@119 -- # set +e 00:19:28.319 06:44:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:28.319 06:44:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:28.319 rmmod nvme_tcp 00:19:28.589 rmmod nvme_fabrics 00:19:28.589 rmmod nvme_keyring 00:19:28.589 06:44:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:28.589 06:44:08 -- nvmf/common.sh@123 -- # set -e 00:19:28.589 06:44:08 -- nvmf/common.sh@124 -- # return 0 00:19:28.589 06:44:08 -- nvmf/common.sh@477 -- # '[' -n 85063 ']' 00:19:28.589 06:44:08 -- nvmf/common.sh@478 -- # killprocess 85063 00:19:28.589 06:44:08 -- common/autotest_common.sh@926 -- # '[' -z 85063 ']' 00:19:28.589 06:44:08 -- common/autotest_common.sh@930 -- # kill -0 85063 00:19:28.589 06:44:08 -- common/autotest_common.sh@931 -- # uname 00:19:28.589 06:44:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:28.589 06:44:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85063 00:19:28.589 killing process with pid 85063 00:19:28.589 06:44:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:28.589 06:44:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:28.589 06:44:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85063' 00:19:28.589 06:44:08 -- common/autotest_common.sh@945 -- # kill 85063 00:19:28.589 06:44:08 -- common/autotest_common.sh@950 -- # wait 85063 00:19:28.852 06:44:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:28.852 06:44:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:28.852 06:44:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:28.852 06:44:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:28.852 06:44:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:28.852 06:44:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.852 06:44:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.852 06:44:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.852 06:44:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:28.852 00:19:28.852 real 0m47.082s 00:19:28.852 user 2m18.523s 00:19:28.852 sys 0m5.373s 00:19:28.852 06:44:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.852 06:44:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 ************************************ 00:19:28.852 END TEST nvmf_timeout 00:19:28.852 ************************************ 00:19:28.852 06:44:08 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:28.852 06:44:08 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:28.852 06:44:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:28.852 06:44:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 06:44:08 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:28.852 00:19:28.852 real 10m28.429s 00:19:28.852 user 29m22.706s 00:19:28.852 sys 3m19.798s 00:19:28.852 06:44:08 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.852 ************************************ 00:19:28.852 END TEST nvmf_tcp 00:19:28.852 ************************************ 00:19:28.852 06:44:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 06:44:08 -- spdk/autotest.sh@296 -- # [[ 1 -eq 0 ]] 00:19:28.852 06:44:08 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:28.852 06:44:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:28.852 06:44:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.852 06:44:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 ************************************ 00:19:28.852 START TEST nvmf_dif 00:19:28.852 ************************************ 00:19:28.852 06:44:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:28.852 * Looking for test storage... 00:19:28.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:28.852 06:44:08 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.852 06:44:08 -- nvmf/common.sh@7 -- # uname -s 00:19:29.109 06:44:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.109 06:44:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.109 06:44:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.109 06:44:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.109 06:44:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.109 06:44:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.109 06:44:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.109 06:44:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.109 06:44:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.109 06:44:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.109 06:44:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:19:29.109 06:44:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:19:29.109 06:44:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.109 06:44:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.109 06:44:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.109 06:44:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.109 06:44:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.109 06:44:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.109 06:44:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.109 06:44:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.109 06:44:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.109 06:44:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.109 06:44:08 -- paths/export.sh@5 -- # export PATH 00:19:29.109 06:44:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.109 06:44:08 -- nvmf/common.sh@46 -- # : 0 00:19:29.109 06:44:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:29.109 06:44:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:29.109 06:44:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:29.109 06:44:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.109 06:44:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.109 06:44:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:29.109 06:44:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:29.109 06:44:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:29.110 06:44:08 -- target/dif.sh@15 -- # NULL_META=16 00:19:29.110 06:44:08 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:29.110 06:44:08 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:29.110 06:44:08 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:29.110 06:44:08 -- target/dif.sh@135 -- # nvmftestinit 00:19:29.110 06:44:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:29.110 06:44:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.110 06:44:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:29.110 06:44:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:29.110 06:44:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:29.110 06:44:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.110 06:44:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:29.110 06:44:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.110 06:44:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:29.110 06:44:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:29.110 06:44:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:29.110 06:44:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:29.110 06:44:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:29.110 06:44:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:29.110 06:44:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.110 06:44:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.110 06:44:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:29.110 
06:44:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:29.110 06:44:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:29.110 06:44:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:29.110 06:44:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:29.110 06:44:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.110 06:44:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:29.110 06:44:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:29.110 06:44:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:29.110 06:44:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:29.110 06:44:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:29.110 06:44:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:29.110 Cannot find device "nvmf_tgt_br" 00:19:29.110 06:44:08 -- nvmf/common.sh@154 -- # true 00:19:29.110 06:44:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:29.110 Cannot find device "nvmf_tgt_br2" 00:19:29.110 06:44:08 -- nvmf/common.sh@155 -- # true 00:19:29.110 06:44:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:29.110 06:44:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:29.110 Cannot find device "nvmf_tgt_br" 00:19:29.110 06:44:08 -- nvmf/common.sh@157 -- # true 00:19:29.110 06:44:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:29.110 Cannot find device "nvmf_tgt_br2" 00:19:29.110 06:44:08 -- nvmf/common.sh@158 -- # true 00:19:29.110 06:44:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:29.110 06:44:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:29.110 06:44:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.110 06:44:08 -- nvmf/common.sh@161 -- # true 00:19:29.110 06:44:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.110 06:44:08 -- nvmf/common.sh@162 -- # true 00:19:29.110 06:44:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:29.110 06:44:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:29.110 06:44:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:29.110 06:44:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:29.110 06:44:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:29.110 06:44:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:29.110 06:44:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:29.110 06:44:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:29.110 06:44:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:29.369 06:44:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:29.369 06:44:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:29.369 06:44:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:29.369 06:44:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:29.369 06:44:09 
-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:29.369 06:44:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:29.369 06:44:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:29.369 06:44:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:29.369 06:44:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:29.369 06:44:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:29.369 06:44:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:29.369 06:44:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:29.369 06:44:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:29.369 06:44:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:29.369 06:44:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:29.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:19:29.369 00:19:29.369 --- 10.0.0.2 ping statistics --- 00:19:29.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.369 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:29.369 06:44:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:29.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:29.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:29.369 00:19:29.369 --- 10.0.0.3 ping statistics --- 00:19:29.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.369 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:29.369 06:44:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:29.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:29.369 00:19:29.369 --- 10.0.0.1 ping statistics --- 00:19:29.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.369 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:29.369 06:44:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.369 06:44:09 -- nvmf/common.sh@421 -- # return 0 00:19:29.369 06:44:09 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:29.369 06:44:09 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:29.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:29.628 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:29.628 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:29.888 06:44:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.888 06:44:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:29.888 06:44:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:29.888 06:44:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.888 06:44:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:29.888 06:44:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:29.888 06:44:09 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:29.888 06:44:09 -- target/dif.sh@137 -- # nvmfappstart 00:19:29.888 06:44:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:29.888 06:44:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:29.888 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:19:29.888 06:44:09 -- nvmf/common.sh@469 -- # nvmfpid=86023 00:19:29.888 06:44:09 -- nvmf/common.sh@470 -- # waitforlisten 86023 00:19:29.888 06:44:09 -- common/autotest_common.sh@819 -- # '[' -z 86023 ']' 00:19:29.888 06:44:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.888 06:44:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:29.888 06:44:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.888 06:44:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.888 06:44:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:29.888 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:19:29.888 [2024-07-12 06:44:09.643332] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:29.888 [2024-07-12 06:44:09.643449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.888 [2024-07-12 06:44:09.787758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.146 [2024-07-12 06:44:09.837552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:30.146 [2024-07-12 06:44:09.838057] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.146 [2024-07-12 06:44:09.838259] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
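
The nvmf_veth_init sequence above, whose three pings just succeeded, boils down to one namespace, three veth pairs, and a bridge; names and addresses below are taken verbatim from the log, condensed into a sketch:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring links up and enslave the host-side peers to one bridge
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
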
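With the network in place, nvmfappstart starts the target inside the namespace, and dif.sh has appended --dif-insert-or-strip to the transport options for the create_transport step that follows. As a sketch (binary path and flags as logged; rpc_cmd in the transcript is the test wrapper around scripts/rpc.py):

    # launch nvmf_tgt in the target namespace and remember its pid
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # once /var/tmp/spdk.sock is listening: TCP transport with DIF insert/strip
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_create_transport -t tcp -o --dif-insert-or-strip
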
00:19:30.146 [2024-07-12 06:44:09.838406] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.146 [2024-07-12 06:44:09.838476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.082 06:44:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:31.082 06:44:10 -- common/autotest_common.sh@852 -- # return 0 00:19:31.082 06:44:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:31.082 06:44:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 06:44:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.082 06:44:10 -- target/dif.sh@139 -- # create_transport 00:19:31.082 06:44:10 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:31.082 06:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 [2024-07-12 06:44:10.778030] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.082 06:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.082 06:44:10 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:31.082 06:44:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:31.082 06:44:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 ************************************ 00:19:31.082 START TEST fio_dif_1_default 00:19:31.082 ************************************ 00:19:31.082 06:44:10 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:19:31.082 06:44:10 -- target/dif.sh@86 -- # create_subsystems 0 00:19:31.082 06:44:10 -- target/dif.sh@28 -- # local sub 00:19:31.082 06:44:10 -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.082 06:44:10 -- target/dif.sh@31 -- # create_subsystem 0 00:19:31.082 06:44:10 -- target/dif.sh@18 -- # local sub_id=0 00:19:31.082 06:44:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:31.082 06:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 bdev_null0 00:19:31.082 06:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.082 06:44:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:31.082 06:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 06:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.082 06:44:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:31.082 06:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 06:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.082 06:44:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:31.082 06:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.082 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.082 [2024-07-12 06:44:10.822076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.082 06:44:10 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.082 06:44:10 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:31.082 06:44:10 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:31.082 06:44:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.082 06:44:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:31.082 06:44:10 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.082 06:44:10 -- target/dif.sh@82 -- # gen_fio_conf 00:19:31.082 06:44:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:31.082 06:44:10 -- target/dif.sh@54 -- # local file 00:19:31.082 06:44:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.082 06:44:10 -- target/dif.sh@56 -- # cat 00:19:31.082 06:44:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:31.082 06:44:10 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.082 06:44:10 -- common/autotest_common.sh@1320 -- # shift 00:19:31.082 06:44:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:31.082 06:44:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.082 06:44:10 -- nvmf/common.sh@520 -- # config=() 00:19:31.082 06:44:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.082 06:44:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.082 06:44:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.082 { 00:19:31.082 "params": { 00:19:31.082 "name": "Nvme$subsystem", 00:19:31.082 "trtype": "$TEST_TRANSPORT", 00:19:31.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.082 "adrfam": "ipv4", 00:19:31.082 "trsvcid": "$NVMF_PORT", 00:19:31.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.082 "hdgst": ${hdgst:-false}, 00:19:31.082 "ddgst": ${ddgst:-false} 00:19:31.082 }, 00:19:31.082 "method": "bdev_nvme_attach_controller" 00:19:31.082 } 00:19:31.082 EOF 00:19:31.082 )") 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:31.082 06:44:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:31.082 06:44:10 -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:31.082 06:44:10 -- nvmf/common.sh@542 -- # cat 00:19:31.082 06:44:10 -- nvmf/common.sh@544 -- # jq . 
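
fio_bdev above is the glue that lets stock fio drive SPDK block devices: it preloads the spdk_bdev ioengine and hands fio two pipe-backed descriptors, the JSON controller config built by gen_nvmf_target_json on /dev/fd/62 and the job file written by gen_fio_conf on /dev/fd/61. Reduced to its skeleton, with paths exactly as they appear in this log:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61
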
00:19:31.082 06:44:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.082 06:44:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.082 "params": { 00:19:31.082 "name": "Nvme0", 00:19:31.082 "trtype": "tcp", 00:19:31.082 "traddr": "10.0.0.2", 00:19:31.082 "adrfam": "ipv4", 00:19:31.082 "trsvcid": "4420", 00:19:31.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:31.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:31.082 "hdgst": false, 00:19:31.082 "ddgst": false 00:19:31.082 }, 00:19:31.082 "method": "bdev_nvme_attach_controller" 00:19:31.082 }' 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:31.082 06:44:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:31.082 06:44:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:31.082 06:44:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:31.082 06:44:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:31.082 06:44:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.082 06:44:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.341 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:31.341 fio-3.35 00:19:31.341 Starting 1 thread 00:19:31.610 [2024-07-12 06:44:11.357007] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
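
gen_fio_conf itself is not echoed into this log, so the following is a hedged reconstruction of the job it writes to /dev/fd/61 for the single-bdev default case: rw, bs, and iodepth come from the fio banner above, the runtime matches the 10001 msec run reported further below, and Nvme0n1 is the namespace bdev that the printed bdev_nvme_attach_controller config for Nvme0 produces; the remaining keys are assumptions.

    # hedged reconstruction - banner parameters confirmed, the rest assumed
    cat <<'EOF'
    [filename0]
    filename=Nvme0n1
    ioengine=spdk_bdev
    rw=randread
    bs=4096
    iodepth=4
    time_based=1
    runtime=10
    EOF
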
00:19:31.610 [2024-07-12 06:44:11.357096] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:41.580 00:19:41.580 filename0: (groupid=0, jobs=1): err= 0: pid=86095: Fri Jul 12 06:44:21 2024 00:19:41.580 read: IOPS=8068, BW=31.5MiB/s (33.0MB/s)(315MiB/10001msec) 00:19:41.580 slat (nsec): min=6585, max=55648, avg=9464.88, stdev=4050.49 00:19:41.580 clat (usec): min=379, max=4531, avg=468.02, stdev=47.84 00:19:41.580 lat (usec): min=386, max=4559, avg=477.48, stdev=48.53 00:19:41.580 clat percentiles (usec): 00:19:41.580 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 437], 00:19:41.580 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 474], 00:19:41.580 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 537], 00:19:41.580 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 644], 00:19:41.580 | 99.99th=[ 758] 00:19:41.580 bw ( KiB/s): min=30880, max=32992, per=100.00%, avg=32296.00, stdev=432.76, samples=19 00:19:41.580 iops : min= 7720, max= 8248, avg=8074.00, stdev=108.44, samples=19 00:19:41.580 lat (usec) : 500=80.90%, 750=19.09%, 1000=0.01% 00:19:41.580 lat (msec) : 10=0.01% 00:19:41.580 cpu : usr=85.42%, sys=12.65%, ctx=48, majf=0, minf=9 00:19:41.580 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.580 issued rwts: total=80692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.580 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:41.580 00:19:41.580 Run status group 0 (all jobs): 00:19:41.580 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=315MiB (331MB), run=10001-10001msec 00:19:41.839 06:44:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:41.839 06:44:21 -- target/dif.sh@43 -- # local sub 00:19:41.839 06:44:21 -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.839 06:44:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:41.839 06:44:21 -- target/dif.sh@36 -- # local sub_id=0 00:19:41.839 06:44:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 00:19:41.839 real 0m10.860s 00:19:41.839 user 0m9.076s 00:19:41.839 sys 0m1.508s 00:19:41.839 06:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 ************************************ 00:19:41.839 END TEST fio_dif_1_default 00:19:41.839 ************************************ 00:19:41.839 06:44:21 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:41.839 06:44:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:41.839 06:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 ************************************ 00:19:41.839 START TEST 
fio_dif_1_multi_subsystems 00:19:41.839 ************************************ 00:19:41.839 06:44:21 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:19:41.839 06:44:21 -- target/dif.sh@92 -- # local files=1 00:19:41.839 06:44:21 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:41.839 06:44:21 -- target/dif.sh@28 -- # local sub 00:19:41.839 06:44:21 -- target/dif.sh@30 -- # for sub in "$@" 00:19:41.839 06:44:21 -- target/dif.sh@31 -- # create_subsystem 0 00:19:41.839 06:44:21 -- target/dif.sh@18 -- # local sub_id=0 00:19:41.839 06:44:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 bdev_null0 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 [2024-07-12 06:44:21.736478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@30 -- # for sub in "$@" 00:19:41.839 06:44:21 -- target/dif.sh@31 -- # create_subsystem 1 00:19:41.839 06:44:21 -- target/dif.sh@18 -- # local sub_id=1 00:19:41.839 06:44:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 bdev_null1 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.839 06:44:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:41.839 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.839 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:19:42.100 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.100 06:44:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.100 06:44:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.100 06:44:21 -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.100 06:44:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.100 06:44:21 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:42.100 06:44:21 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:42.100 06:44:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:42.100 06:44:21 -- nvmf/common.sh@520 -- # config=() 00:19:42.100 06:44:21 -- nvmf/common.sh@520 -- # local subsystem config 00:19:42.100 06:44:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:42.100 06:44:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:42.100 { 00:19:42.100 "params": { 00:19:42.100 "name": "Nvme$subsystem", 00:19:42.100 "trtype": "$TEST_TRANSPORT", 00:19:42.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.100 "adrfam": "ipv4", 00:19:42.100 "trsvcid": "$NVMF_PORT", 00:19:42.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.100 "hdgst": ${hdgst:-false}, 00:19:42.100 "ddgst": ${ddgst:-false} 00:19:42.100 }, 00:19:42.100 "method": "bdev_nvme_attach_controller" 00:19:42.100 } 00:19:42.100 EOF 00:19:42.100 )") 00:19:42.100 06:44:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.100 06:44:21 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.100 06:44:21 -- target/dif.sh@82 -- # gen_fio_conf 00:19:42.100 06:44:21 -- target/dif.sh@54 -- # local file 00:19:42.100 06:44:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:42.100 06:44:21 -- target/dif.sh@56 -- # cat 00:19:42.100 06:44:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:42.100 06:44:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:42.100 06:44:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.100 06:44:21 -- common/autotest_common.sh@1320 -- # shift 00:19:42.100 06:44:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:42.100 06:44:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:42.100 06:44:21 -- nvmf/common.sh@542 -- # cat 00:19:42.100 06:44:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:42.100 06:44:21 -- target/dif.sh@72 -- # (( file <= files )) 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.100 06:44:21 -- target/dif.sh@73 -- # cat 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:42.100 06:44:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:42.100 06:44:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:42.100 { 00:19:42.100 "params": { 00:19:42.100 "name": "Nvme$subsystem", 00:19:42.100 "trtype": "$TEST_TRANSPORT", 00:19:42.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.100 "adrfam": "ipv4", 00:19:42.100 "trsvcid": "$NVMF_PORT", 00:19:42.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.100 "hdgst": ${hdgst:-false}, 00:19:42.100 "ddgst": ${ddgst:-false} 00:19:42.100 }, 00:19:42.100 "method": "bdev_nvme_attach_controller" 00:19:42.100 } 00:19:42.100 EOF 00:19:42.100 )") 00:19:42.100 06:44:21 -- nvmf/common.sh@542 -- # cat 00:19:42.100 06:44:21 -- target/dif.sh@72 
-- # (( file++ )) 00:19:42.100 06:44:21 -- target/dif.sh@72 -- # (( file <= files )) 00:19:42.100 06:44:21 -- nvmf/common.sh@544 -- # jq . 00:19:42.100 06:44:21 -- nvmf/common.sh@545 -- # IFS=, 00:19:42.100 06:44:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:42.100 "params": { 00:19:42.100 "name": "Nvme0", 00:19:42.100 "trtype": "tcp", 00:19:42.100 "traddr": "10.0.0.2", 00:19:42.100 "adrfam": "ipv4", 00:19:42.100 "trsvcid": "4420", 00:19:42.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:42.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:42.100 "hdgst": false, 00:19:42.100 "ddgst": false 00:19:42.100 }, 00:19:42.100 "method": "bdev_nvme_attach_controller" 00:19:42.100 },{ 00:19:42.100 "params": { 00:19:42.100 "name": "Nvme1", 00:19:42.100 "trtype": "tcp", 00:19:42.100 "traddr": "10.0.0.2", 00:19:42.100 "adrfam": "ipv4", 00:19:42.100 "trsvcid": "4420", 00:19:42.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.100 "hdgst": false, 00:19:42.100 "ddgst": false 00:19:42.100 }, 00:19:42.100 "method": "bdev_nvme_attach_controller" 00:19:42.100 }' 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:42.100 06:44:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:42.100 06:44:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:42.100 06:44:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:42.100 06:44:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:42.100 06:44:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:42.100 06:44:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.100 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:42.100 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:42.100 fio-3.35 00:19:42.100 Starting 2 threads 00:19:42.668 [2024-07-12 06:44:22.391620] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
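
The two-controller JSON above points Nvme0 and Nvme1 at nqn.2016-06.io.spdk:cnode0 and cnode1 on the same 10.0.0.2:4420 listener; the subsystems themselves were built a few steps earlier by create_subsystems. Flattened from this log's rpc_cmd calls into plain rpc.py form for the second subsystem (the first differs only in its 0 suffix):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        --serial-number 53313233-1 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
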
00:19:42.668 [2024-07-12 06:44:22.391699] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:52.668 00:19:52.668 filename0: (groupid=0, jobs=1): err= 0: pid=86254: Fri Jul 12 06:44:32 2024 00:19:52.668 read: IOPS=4545, BW=17.8MiB/s (18.6MB/s)(178MiB/10001msec) 00:19:52.668 slat (nsec): min=4762, max=79512, avg=14582.23, stdev=5661.65 00:19:52.668 clat (usec): min=588, max=1825, avg=840.66, stdev=69.67 00:19:52.668 lat (usec): min=594, max=1840, avg=855.24, stdev=70.82 00:19:52.668 clat percentiles (usec): 00:19:52.668 | 1.00th=[ 676], 5.00th=[ 725], 10.00th=[ 758], 20.00th=[ 791], 00:19:52.668 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[ 840], 60.00th=[ 857], 00:19:52.668 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 955], 00:19:52.668 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1074], 99.95th=[ 1090], 00:19:52.668 | 99.99th=[ 1319] 00:19:52.668 bw ( KiB/s): min=17600, max=19488, per=49.86%, avg=18133.89, stdev=461.64, samples=19 00:19:52.668 iops : min= 4400, max= 4872, avg=4533.47, stdev=115.41, samples=19 00:19:52.668 lat (usec) : 750=8.59%, 1000=89.97% 00:19:52.668 lat (msec) : 2=1.44% 00:19:52.668 cpu : usr=89.65%, sys=8.90%, ctx=19, majf=0, minf=0 00:19:52.668 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.668 issued rwts: total=45464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.668 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:52.668 filename1: (groupid=0, jobs=1): err= 0: pid=86255: Fri Jul 12 06:44:32 2024 00:19:52.668 read: IOPS=4545, BW=17.8MiB/s (18.6MB/s)(178MiB/10001msec) 00:19:52.668 slat (nsec): min=6314, max=75294, avg=14687.67, stdev=5694.00 00:19:52.668 clat (usec): min=427, max=1839, avg=839.33, stdev=65.96 00:19:52.668 lat (usec): min=434, max=1851, avg=854.01, stdev=66.85 00:19:52.668 clat percentiles (usec): 00:19:52.668 | 1.00th=[ 676], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 791], 00:19:52.668 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[ 840], 60.00th=[ 857], 00:19:52.668 | 70.00th=[ 873], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 947], 00:19:52.668 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 1074], 99.95th=[ 1090], 00:19:52.668 | 99.99th=[ 1434] 00:19:52.668 bw ( KiB/s): min=17600, max=19488, per=49.86%, avg=18132.21, stdev=463.26, samples=19 00:19:52.668 iops : min= 4400, max= 4872, avg=4533.05, stdev=115.81, samples=19 00:19:52.668 lat (usec) : 500=0.01%, 750=6.71%, 1000=92.13% 00:19:52.668 lat (msec) : 2=1.15% 00:19:52.668 cpu : usr=89.76%, sys=8.75%, ctx=9, majf=0, minf=0 00:19:52.668 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.668 issued rwts: total=45464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.668 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:52.668 00:19:52.668 Run status group 0 (all jobs): 00:19:52.668 READ: bw=35.5MiB/s (37.2MB/s), 17.8MiB/s-17.8MiB/s (18.6MB/s-18.6MB/s), io=355MiB (372MB), run=10001-10001msec 00:19:52.927 06:44:32 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:52.927 06:44:32 -- target/dif.sh@43 -- # local sub 00:19:52.927 06:44:32 -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.927 06:44:32 -- target/dif.sh@46 -- 
# destroy_subsystem 0 00:19:52.927 06:44:32 -- target/dif.sh@36 -- # local sub_id=0 00:19:52.927 06:44:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.927 06:44:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.927 06:44:32 -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.927 06:44:32 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:52.927 06:44:32 -- target/dif.sh@36 -- # local sub_id=1 00:19:52.927 06:44:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.927 06:44:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.927 00:19:52.927 real 0m10.986s 00:19:52.927 user 0m18.580s 00:19:52.927 sys 0m2.002s 00:19:52.927 06:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.927 ************************************ 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 END TEST fio_dif_1_multi_subsystems 00:19:52.927 ************************************ 00:19:52.927 06:44:32 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:52.927 06:44:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:52.927 06:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 ************************************ 00:19:52.927 START TEST fio_dif_rand_params 00:19:52.927 ************************************ 00:19:52.927 06:44:32 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:19:52.927 06:44:32 -- target/dif.sh@100 -- # local NULL_DIF 00:19:52.927 06:44:32 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:52.927 06:44:32 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:52.927 06:44:32 -- target/dif.sh@103 -- # bs=128k 00:19:52.927 06:44:32 -- target/dif.sh@103 -- # numjobs=3 00:19:52.927 06:44:32 -- target/dif.sh@103 -- # iodepth=3 00:19:52.927 06:44:32 -- target/dif.sh@103 -- # runtime=5 00:19:52.927 06:44:32 -- target/dif.sh@105 -- # create_subsystems 0 00:19:52.927 06:44:32 -- target/dif.sh@28 -- # local sub 00:19:52.927 06:44:32 -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.927 06:44:32 -- target/dif.sh@31 -- # create_subsystem 0 00:19:52.927 06:44:32 -- target/dif.sh@18 -- # local sub_id=0 00:19:52.927 06:44:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 bdev_null0 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.927 
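
destroy_subsystems, run above between the two dif tests, is the exact inverse of create_subsystems: per index it deletes the subsystem first, then its backing null bdev. As plain rpc.py calls, condensed from the rpc_cmd lines in the teardown:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1; do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "bdev_null$i"
    done
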
06:44:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.927 06:44:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:52.927 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.927 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.928 06:44:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.928 06:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.928 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.928 [2024-07-12 06:44:32.781937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.928 06:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.928 06:44:32 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:52.928 06:44:32 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:52.928 06:44:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:52.928 06:44:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.928 06:44:32 -- nvmf/common.sh@520 -- # config=() 00:19:52.928 06:44:32 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.928 06:44:32 -- nvmf/common.sh@520 -- # local subsystem config 00:19:52.928 06:44:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:52.928 06:44:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:52.928 06:44:32 -- target/dif.sh@82 -- # gen_fio_conf 00:19:52.928 06:44:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.928 06:44:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:52.928 { 00:19:52.928 "params": { 00:19:52.928 "name": "Nvme$subsystem", 00:19:52.928 "trtype": "$TEST_TRANSPORT", 00:19:52.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.928 "adrfam": "ipv4", 00:19:52.928 "trsvcid": "$NVMF_PORT", 00:19:52.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.928 "hdgst": ${hdgst:-false}, 00:19:52.928 "ddgst": ${ddgst:-false} 00:19:52.928 }, 00:19:52.928 "method": "bdev_nvme_attach_controller" 00:19:52.928 } 00:19:52.928 EOF 00:19:52.928 )") 00:19:52.928 06:44:32 -- target/dif.sh@54 -- # local file 00:19:52.928 06:44:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:52.928 06:44:32 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.928 06:44:32 -- target/dif.sh@56 -- # cat 00:19:52.928 06:44:32 -- common/autotest_common.sh@1320 -- # shift 00:19:52.928 06:44:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:52.928 06:44:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.928 06:44:32 -- nvmf/common.sh@542 -- # cat 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.928 06:44:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:52.928 06:44:32 
-- common/autotest_common.sh@1324 -- # grep libasan 00:19:52.928 06:44:32 -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:52.928 06:44:32 -- nvmf/common.sh@544 -- # jq . 00:19:52.928 06:44:32 -- nvmf/common.sh@545 -- # IFS=, 00:19:52.928 06:44:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:52.928 "params": { 00:19:52.928 "name": "Nvme0", 00:19:52.928 "trtype": "tcp", 00:19:52.928 "traddr": "10.0.0.2", 00:19:52.928 "adrfam": "ipv4", 00:19:52.928 "trsvcid": "4420", 00:19:52.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.928 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.928 "hdgst": false, 00:19:52.928 "ddgst": false 00:19:52.928 }, 00:19:52.928 "method": "bdev_nvme_attach_controller" 00:19:52.928 }' 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:52.928 06:44:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:52.928 06:44:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:52.928 06:44:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:52.928 06:44:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:52.928 06:44:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.928 06:44:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:53.187 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:53.187 ... 00:19:53.187 fio-3.35 00:19:53.187 Starting 3 threads 00:19:53.444 [2024-07-12 06:44:33.290249] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
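For context, the resolved bdev_nvme_attach_controller entry printed by nvmf/common.sh@546 above is not handed to fio bare: gen_nvmf_target_json joins the per-subsystem entries and pipes them through jq inside a larger SPDK app-config document. A minimal sketch of the full payload on /dev/fd/62, assuming the usual "subsystems"/"bdev" wrapper shape (only the inner attach entry is verbatim from the log; the wrapper in this tree may add further entries):

  jq . <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  JSON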
00:19:53.444 [2024-07-12 06:44:33.290337] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:58.710 00:19:58.710 filename0: (groupid=0, jobs=1): err= 0: pid=86408: Fri Jul 12 06:44:38 2024 00:19:58.710 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5008msec) 00:19:58.710 slat (nsec): min=7835, max=68200, avg=15903.89, stdev=6413.76 00:19:58.710 clat (usec): min=11406, max=14803, avg=12547.60, stdev=552.51 00:19:58.710 lat (usec): min=11420, max=14871, avg=12563.50, stdev=553.47 00:19:58.710 clat percentiles (usec): 00:19:58.710 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11994], 00:19:58.710 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:58.710 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:19:58.710 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14746], 99.95th=[14746], 00:19:58.710 | 99.99th=[14746] 00:19:58.710 bw ( KiB/s): min=29952, max=31488, per=33.30%, avg=30489.60, stdev=632.27, samples=10 00:19:58.710 iops : min= 234, max= 246, avg=238.20, stdev= 4.94, samples=10 00:19:58.710 lat (msec) : 20=100.00% 00:19:58.710 cpu : usr=90.47%, sys=9.01%, ctx=5, majf=0, minf=0 00:19:58.710 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.710 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.710 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:58.710 filename0: (groupid=0, jobs=1): err= 0: pid=86409: Fri Jul 12 06:44:38 2024 00:19:58.710 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(149MiB/5007msec) 00:19:58.710 slat (nsec): min=7895, max=56738, avg=17041.99, stdev=6182.86 00:19:58.710 clat (usec): min=11409, max=13766, avg=12541.15, stdev=542.90 00:19:58.710 lat (usec): min=11423, max=13783, avg=12558.20, stdev=543.76 00:19:58.710 clat percentiles (usec): 00:19:58.710 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11731], 20.00th=[11994], 00:19:58.710 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:58.710 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:19:58.710 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13829], 00:19:58.710 | 99.99th=[13829] 00:19:58.710 bw ( KiB/s): min=29952, max=31488, per=33.31%, avg=30495.60, stdev=626.87, samples=10 00:19:58.710 iops : min= 234, max= 246, avg=238.20, stdev= 4.94, samples=10 00:19:58.710 lat (msec) : 20=100.00% 00:19:58.710 cpu : usr=91.35%, sys=8.09%, ctx=9, majf=0, minf=9 00:19:58.710 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.710 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.710 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:58.710 filename0: (groupid=0, jobs=1): err= 0: pid=86410: Fri Jul 12 06:44:38 2024 00:19:58.710 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(149MiB/5005msec) 00:19:58.710 slat (usec): min=6, max=114, avg=17.16, stdev= 7.09 00:19:58.710 clat (usec): min=11435, max=13769, avg=12536.72, stdev=541.23 00:19:58.710 lat (usec): min=11476, max=13785, avg=12553.89, stdev=541.73 00:19:58.710 clat percentiles (usec): 00:19:58.710 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11731], 
20.00th=[11863], 00:19:58.710 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:58.710 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:19:58.710 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13829], 00:19:58.710 | 99.99th=[13829] 00:19:58.710 bw ( KiB/s): min=29184, max=31488, per=33.37%, avg=30549.33, stdev=746.36, samples=9 00:19:58.710 iops : min= 228, max= 246, avg=238.67, stdev= 5.83, samples=9 00:19:58.710 lat (msec) : 20=100.00% 00:19:58.710 cpu : usr=91.35%, sys=7.85%, ctx=55, majf=0, minf=9 00:19:58.710 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.710 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.710 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:58.710 00:19:58.710 Run status group 0 (all jobs): 00:19:58.710 READ: bw=89.4MiB/s (93.7MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.3MB/s), io=448MiB (469MB), run=5005-5008msec 00:19:58.710 06:44:38 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:58.710 06:44:38 -- target/dif.sh@43 -- # local sub 00:19:58.710 06:44:38 -- target/dif.sh@45 -- # for sub in "$@" 00:19:58.710 06:44:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:58.710 06:44:38 -- target/dif.sh@36 -- # local sub_id=0 00:19:58.710 06:44:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:58.710 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.710 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.710 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.710 06:44:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:58.710 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.710 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.710 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.710 06:44:38 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:58.710 06:44:38 -- target/dif.sh@109 -- # bs=4k 00:19:58.710 06:44:38 -- target/dif.sh@109 -- # numjobs=8 00:19:58.710 06:44:38 -- target/dif.sh@109 -- # iodepth=16 00:19:58.710 06:44:38 -- target/dif.sh@109 -- # runtime= 00:19:58.710 06:44:38 -- target/dif.sh@109 -- # files=2 00:19:58.710 06:44:38 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:58.710 06:44:38 -- target/dif.sh@28 -- # local sub 00:19:58.710 06:44:38 -- target/dif.sh@30 -- # for sub in "$@" 00:19:58.711 06:44:38 -- target/dif.sh@31 -- # create_subsystem 0 00:19:58.711 06:44:38 -- target/dif.sh@18 -- # local sub_id=0 00:19:58.711 06:44:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:58.711 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.711 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.711 bdev_null0 00:19:58.711 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.711 06:44:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:58.711 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.711 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.970 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.970 06:44:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:58.970 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.970 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.970 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.970 06:44:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.970 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.970 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.970 [2024-07-12 06:44:38.647060] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.970 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.970 06:44:38 -- target/dif.sh@30 -- # for sub in "$@" 00:19:58.970 06:44:38 -- target/dif.sh@31 -- # create_subsystem 1 00:19:58.970 06:44:38 -- target/dif.sh@18 -- # local sub_id=1 00:19:58.970 06:44:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:58.970 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.970 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.970 bdev_null1 00:19:58.970 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@30 -- # for sub in "$@" 00:19:58.971 06:44:38 -- target/dif.sh@31 -- # create_subsystem 2 00:19:58.971 06:44:38 -- target/dif.sh@18 -- # local sub_id=2 00:19:58.971 06:44:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 bdev_null2 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:58.971 06:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.971 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:58.971 06:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.971 06:44:38 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:58.971 06:44:38 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:58.971 06:44:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:58.971 06:44:38 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:58.971 06:44:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:58.971 06:44:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:58.971 06:44:38 -- target/dif.sh@82 -- # gen_fio_conf 00:19:58.971 06:44:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.971 06:44:38 -- nvmf/common.sh@520 -- # config=() 00:19:58.971 06:44:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:58.971 06:44:38 -- target/dif.sh@54 -- # local file 00:19:58.971 06:44:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.971 06:44:38 -- target/dif.sh@56 -- # cat 00:19:58.971 06:44:38 -- common/autotest_common.sh@1320 -- # shift 00:19:58.971 06:44:38 -- nvmf/common.sh@520 -- # local subsystem config 00:19:58.971 06:44:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:58.971 06:44:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.971 06:44:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:58.971 06:44:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:58.971 { 00:19:58.971 "params": { 00:19:58.971 "name": "Nvme$subsystem", 00:19:58.971 "trtype": "$TEST_TRANSPORT", 00:19:58.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.971 "adrfam": "ipv4", 00:19:58.971 "trsvcid": "$NVMF_PORT", 00:19:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.971 "hdgst": ${hdgst:-false}, 00:19:58.971 "ddgst": ${ddgst:-false} 00:19:58.971 }, 00:19:58.971 "method": "bdev_nvme_attach_controller" 00:19:58.971 } 00:19:58.971 EOF 00:19:58.971 )") 00:19:58.971 06:44:38 -- nvmf/common.sh@542 -- # cat 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:58.971 06:44:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:58.971 06:44:38 -- target/dif.sh@72 -- # (( file <= files )) 00:19:58.971 06:44:38 -- target/dif.sh@73 -- # cat 00:19:58.971 06:44:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:58.971 06:44:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:58.971 { 00:19:58.971 "params": { 00:19:58.971 "name": "Nvme$subsystem", 00:19:58.971 "trtype": "$TEST_TRANSPORT", 00:19:58.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.971 "adrfam": "ipv4", 00:19:58.971 "trsvcid": "$NVMF_PORT", 00:19:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.971 "hdgst": ${hdgst:-false}, 
00:19:58.971 "ddgst": ${ddgst:-false} 00:19:58.971 }, 00:19:58.971 "method": "bdev_nvme_attach_controller" 00:19:58.971 } 00:19:58.971 EOF 00:19:58.971 )") 00:19:58.971 06:44:38 -- nvmf/common.sh@542 -- # cat 00:19:58.971 06:44:38 -- target/dif.sh@72 -- # (( file++ )) 00:19:58.971 06:44:38 -- target/dif.sh@72 -- # (( file <= files )) 00:19:58.971 06:44:38 -- target/dif.sh@73 -- # cat 00:19:58.971 06:44:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:58.971 06:44:38 -- target/dif.sh@72 -- # (( file++ )) 00:19:58.971 06:44:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:58.971 { 00:19:58.971 "params": { 00:19:58.971 "name": "Nvme$subsystem", 00:19:58.971 "trtype": "$TEST_TRANSPORT", 00:19:58.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.971 "adrfam": "ipv4", 00:19:58.971 "trsvcid": "$NVMF_PORT", 00:19:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.971 "hdgst": ${hdgst:-false}, 00:19:58.971 "ddgst": ${ddgst:-false} 00:19:58.971 }, 00:19:58.971 "method": "bdev_nvme_attach_controller" 00:19:58.971 } 00:19:58.971 EOF 00:19:58.971 )") 00:19:58.971 06:44:38 -- target/dif.sh@72 -- # (( file <= files )) 00:19:58.971 06:44:38 -- nvmf/common.sh@542 -- # cat 00:19:58.971 06:44:38 -- nvmf/common.sh@544 -- # jq . 00:19:58.971 06:44:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:58.971 06:44:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:58.971 "params": { 00:19:58.971 "name": "Nvme0", 00:19:58.971 "trtype": "tcp", 00:19:58.971 "traddr": "10.0.0.2", 00:19:58.971 "adrfam": "ipv4", 00:19:58.971 "trsvcid": "4420", 00:19:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.971 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:58.971 "hdgst": false, 00:19:58.971 "ddgst": false 00:19:58.971 }, 00:19:58.971 "method": "bdev_nvme_attach_controller" 00:19:58.971 },{ 00:19:58.971 "params": { 00:19:58.971 "name": "Nvme1", 00:19:58.971 "trtype": "tcp", 00:19:58.971 "traddr": "10.0.0.2", 00:19:58.971 "adrfam": "ipv4", 00:19:58.971 "trsvcid": "4420", 00:19:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.971 "hdgst": false, 00:19:58.971 "ddgst": false 00:19:58.971 }, 00:19:58.971 "method": "bdev_nvme_attach_controller" 00:19:58.971 },{ 00:19:58.971 "params": { 00:19:58.971 "name": "Nvme2", 00:19:58.971 "trtype": "tcp", 00:19:58.971 "traddr": "10.0.0.2", 00:19:58.971 "adrfam": "ipv4", 00:19:58.971 "trsvcid": "4420", 00:19:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:58.971 "hdgst": false, 00:19:58.971 "ddgst": false 00:19:58.971 }, 00:19:58.971 "method": "bdev_nvme_attach_controller" 00:19:58.971 }' 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:58.971 06:44:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:58.971 06:44:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:58.971 06:44:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:58.971 06:44:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:58.971 06:44:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:58.971 06:44:38 -- 
common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.230 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:59.230 ... 00:19:59.230 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:59.230 ... 00:19:59.230 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:59.230 ... 00:19:59.230 fio-3.35 00:19:59.230 Starting 24 threads 00:19:59.490 [2024-07-12 06:44:39.379105] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:59.490 [2024-07-12 06:44:39.379189] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:11.709 00:20:11.709 filename0: (groupid=0, jobs=1): err= 0: pid=86513: Fri Jul 12 06:44:49 2024 00:20:11.709 read: IOPS=190, BW=763KiB/s (781kB/s)(7648KiB/10029msec) 00:20:11.709 slat (usec): min=7, max=8022, avg=17.97, stdev=183.20 00:20:11.709 clat (msec): min=35, max=155, avg=83.79, stdev=22.61 00:20:11.709 lat (msec): min=35, max=155, avg=83.80, stdev=22.61 00:20:11.709 clat percentiles (msec): 00:20:11.709 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 62], 00:20:11.709 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 96], 00:20:11.709 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 121], 00:20:11.709 | 99.00th=[ 134], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:20:11.709 | 99.99th=[ 157] 00:20:11.709 bw ( KiB/s): min= 544, max= 1010, per=3.97%, avg=758.55, stdev=144.90, samples=20 00:20:11.709 iops : min= 136, max= 252, avg=189.60, stdev=36.18, samples=20 00:20:11.709 lat (msec) : 50=6.28%, 100=65.17%, 250=28.56% 00:20:11.709 cpu : usr=31.18%, sys=1.97%, ctx=890, majf=0, minf=9 00:20:11.709 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=79.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:11.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 complete : 0=0.0%, 4=88.6%, 8=10.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.709 filename0: (groupid=0, jobs=1): err= 0: pid=86514: Fri Jul 12 06:44:49 2024 00:20:11.709 read: IOPS=196, BW=784KiB/s (803kB/s)(7868KiB/10035msec) 00:20:11.709 slat (usec): min=4, max=4022, avg=16.40, stdev=90.49 00:20:11.709 clat (msec): min=13, max=155, avg=81.51, stdev=23.87 00:20:11.709 lat (msec): min=13, max=155, avg=81.52, stdev=23.87 00:20:11.709 clat percentiles (msec): 00:20:11.709 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 62], 00:20:11.709 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 91], 00:20:11.709 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 118], 00:20:11.709 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:20:11.709 | 99.99th=[ 157] 00:20:11.709 bw ( KiB/s): min= 608, max= 1168, per=4.08%, avg=780.15, stdev=169.01, samples=20 00:20:11.709 iops : min= 152, max= 292, avg=195.00, stdev=42.24, samples=20 00:20:11.709 lat (msec) : 20=0.92%, 50=11.85%, 100=60.85%, 250=26.39% 00:20:11.709 cpu : usr=34.32%, sys=1.76%, ctx=1326, majf=0, minf=9 00:20:11.709 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.0%, 16=17.0%, 32=0.0%, >=64=0.0% 00:20:11.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 
complete : 0=0.0%, 4=88.1%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.709 filename0: (groupid=0, jobs=1): err= 0: pid=86515: Fri Jul 12 06:44:49 2024 00:20:11.709 read: IOPS=202, BW=810KiB/s (829kB/s)(8136KiB/10050msec) 00:20:11.709 slat (usec): min=4, max=8027, avg=26.03, stdev=307.54 00:20:11.709 clat (msec): min=3, max=145, avg=78.85, stdev=24.29 00:20:11.709 lat (msec): min=3, max=145, avg=78.88, stdev=24.29 00:20:11.709 clat percentiles (msec): 00:20:11.709 | 1.00th=[ 13], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.709 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 85], 00:20:11.709 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 111], 00:20:11.709 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:20:11.709 | 99.99th=[ 146] 00:20:11.709 bw ( KiB/s): min= 632, max= 1502, per=4.22%, avg=806.70, stdev=203.75, samples=20 00:20:11.709 iops : min= 158, max= 375, avg=201.65, stdev=50.85, samples=20 00:20:11.709 lat (msec) : 4=0.34%, 10=0.54%, 20=1.57%, 50=9.98%, 100=64.16% 00:20:11.709 lat (msec) : 250=23.40% 00:20:11.709 cpu : usr=31.51%, sys=1.71%, ctx=855, majf=0, minf=9 00:20:11.709 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:11.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.709 filename0: (groupid=0, jobs=1): err= 0: pid=86516: Fri Jul 12 06:44:49 2024 00:20:11.709 read: IOPS=199, BW=797KiB/s (816kB/s)(7988KiB/10024msec) 00:20:11.709 slat (usec): min=4, max=8030, avg=20.63, stdev=197.44 00:20:11.709 clat (msec): min=27, max=131, avg=80.17, stdev=22.16 00:20:11.709 lat (msec): min=27, max=131, avg=80.19, stdev=22.16 00:20:11.709 clat percentiles (msec): 00:20:11.709 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 62], 00:20:11.709 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:20:11.709 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 115], 00:20:11.709 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:20:11.709 | 99.99th=[ 132] 00:20:11.709 bw ( KiB/s): min= 624, max= 1072, per=4.15%, avg=792.40, stdev=132.89, samples=20 00:20:11.709 iops : min= 156, max= 268, avg=198.10, stdev=33.22, samples=20 00:20:11.709 lat (msec) : 50=11.92%, 100=62.89%, 250=25.19% 00:20:11.709 cpu : usr=37.51%, sys=2.00%, ctx=1094, majf=0, minf=9 00:20:11.709 IO depths : 1=0.1%, 2=1.1%, 4=4.0%, 8=79.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:11.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.709 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename0: (groupid=0, jobs=1): err= 0: pid=86517: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=204, BW=817KiB/s (837kB/s)(8196KiB/10032msec) 00:20:11.710 slat (usec): min=4, max=8033, avg=19.83, stdev=177.21 00:20:11.710 clat (msec): min=23, max=141, avg=78.22, stdev=22.57 00:20:11.710 lat (msec): min=23, max=142, avg=78.24, stdev=22.57 00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 
33], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.710 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 83], 00:20:11.710 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 114], 00:20:11.710 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 136], 00:20:11.710 | 99.99th=[ 142] 00:20:11.710 bw ( KiB/s): min= 640, max= 1096, per=4.26%, avg=813.20, stdev=146.44, samples=20 00:20:11.710 iops : min= 160, max= 274, avg=203.30, stdev=36.61, samples=20 00:20:11.710 lat (msec) : 50=13.37%, 100=64.47%, 250=22.16% 00:20:11.710 cpu : usr=31.75%, sys=1.86%, ctx=932, majf=0, minf=9 00:20:11.710 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename0: (groupid=0, jobs=1): err= 0: pid=86518: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=201, BW=807KiB/s (827kB/s)(8092KiB/10024msec) 00:20:11.710 slat (usec): min=3, max=8024, avg=25.17, stdev=235.67 00:20:11.710 clat (msec): min=28, max=143, avg=79.11, stdev=23.02 00:20:11.710 lat (msec): min=28, max=143, avg=79.14, stdev=23.01 00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.710 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:20:11.710 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 118], 00:20:11.710 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 144], 00:20:11.710 | 99.99th=[ 144] 00:20:11.710 bw ( KiB/s): min= 528, max= 1104, per=4.20%, avg=802.85, stdev=156.91, samples=20 00:20:11.710 iops : min= 132, max= 276, avg=200.70, stdev=39.22, samples=20 00:20:11.710 lat (msec) : 50=11.96%, 100=62.93%, 250=25.11% 00:20:11.710 cpu : usr=43.40%, sys=2.40%, ctx=1288, majf=0, minf=9 00:20:11.710 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename0: (groupid=0, jobs=1): err= 0: pid=86519: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=209, BW=839KiB/s (859kB/s)(8412KiB/10032msec) 00:20:11.710 slat (usec): min=4, max=5495, avg=29.89, stdev=244.94 00:20:11.710 clat (msec): min=22, max=135, avg=76.17, stdev=22.37 00:20:11.710 lat (msec): min=22, max=135, avg=76.20, stdev=22.38 00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 59], 00:20:11.710 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:20:11.710 | 70.00th=[ 91], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 113], 00:20:11.710 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 129], 00:20:11.710 | 99.99th=[ 136] 00:20:11.710 bw ( KiB/s): min= 664, max= 1096, per=4.37%, avg=834.80, stdev=145.10, samples=20 00:20:11.710 iops : min= 166, max= 274, avg=208.70, stdev=36.28, samples=20 00:20:11.710 lat (msec) : 50=15.22%, 100=64.15%, 250=20.64% 00:20:11.710 cpu : usr=43.39%, sys=2.53%, ctx=1276, majf=0, minf=9 00:20:11.710 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.6%, 16=15.7%, 32=0.0%, 
>=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename0: (groupid=0, jobs=1): err= 0: pid=86520: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=183, BW=734KiB/s (752kB/s)(7364KiB/10030msec) 00:20:11.710 slat (usec): min=3, max=4026, avg=20.57, stdev=146.24 00:20:11.710 clat (msec): min=32, max=143, avg=87.02, stdev=19.04 00:20:11.710 lat (msec): min=32, max=143, avg=87.04, stdev=19.04 00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 51], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 70], 00:20:11.710 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 94], 00:20:11.710 | 70.00th=[ 102], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 117], 00:20:11.710 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:20:11.710 | 99.99th=[ 144] 00:20:11.710 bw ( KiB/s): min= 632, max= 896, per=3.82%, avg=730.00, stdev=74.95, samples=20 00:20:11.710 iops : min= 158, max= 224, avg=182.50, stdev=18.74, samples=20 00:20:11.710 lat (msec) : 50=0.33%, 100=68.12%, 250=31.56% 00:20:11.710 cpu : usr=42.05%, sys=2.33%, ctx=1375, majf=0, minf=9 00:20:11.710 IO depths : 1=0.1%, 2=3.7%, 4=14.7%, 8=67.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=91.2%, 8=5.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=1841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename1: (groupid=0, jobs=1): err= 0: pid=86521: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=201, BW=805KiB/s (825kB/s)(8064KiB/10015msec) 00:20:11.710 slat (usec): min=4, max=4026, avg=18.55, stdev=126.42 00:20:11.710 clat (msec): min=14, max=124, avg=79.37, stdev=21.91 00:20:11.710 lat (msec): min=14, max=124, avg=79.39, stdev=21.91 00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:20:11.710 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:20:11.710 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 114], 00:20:11.710 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 126], 00:20:11.710 | 99.99th=[ 126] 00:20:11.710 bw ( KiB/s): min= 640, max= 1128, per=4.19%, avg=800.05, stdev=140.82, samples=20 00:20:11.710 iops : min= 160, max= 282, avg=200.00, stdev=35.20, samples=20 00:20:11.710 lat (msec) : 20=0.30%, 50=9.72%, 100=67.76%, 250=22.22% 00:20:11.710 cpu : usr=38.91%, sys=2.16%, ctx=1240, majf=0, minf=9 00:20:11.710 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename1: (groupid=0, jobs=1): err= 0: pid=86522: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=206, BW=825KiB/s (844kB/s)(8252KiB/10007msec) 00:20:11.710 slat (usec): min=4, max=8029, avg=30.53, stdev=352.49 00:20:11.710 clat (msec): min=8, max=144, avg=77.50, stdev=22.73 00:20:11.710 lat (msec): min=8, max=144, avg=77.53, stdev=22.74 
00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.710 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:20:11.710 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 111], 00:20:11.710 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:20:11.710 | 99.99th=[ 144] 00:20:11.710 bw ( KiB/s): min= 640, max= 1128, per=4.30%, avg=821.21, stdev=147.97, samples=19 00:20:11.710 iops : min= 160, max= 282, avg=205.26, stdev=37.02, samples=19 00:20:11.710 lat (msec) : 10=0.29%, 50=13.43%, 100=65.44%, 250=20.84% 00:20:11.710 cpu : usr=31.45%, sys=1.77%, ctx=859, majf=0, minf=9 00:20:11.710 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename1: (groupid=0, jobs=1): err= 0: pid=86523: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=192, BW=769KiB/s (787kB/s)(7688KiB/10003msec) 00:20:11.710 slat (usec): min=4, max=6022, avg=23.97, stdev=209.32 00:20:11.710 clat (msec): min=6, max=145, avg=83.15, stdev=21.38 00:20:11.710 lat (msec): min=6, max=145, avg=83.17, stdev=21.38 00:20:11.710 clat percentiles (msec): 00:20:11.710 | 1.00th=[ 35], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 66], 00:20:11.710 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 88], 00:20:11.710 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 116], 00:20:11.710 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 146], 00:20:11.710 | 99.99th=[ 146] 00:20:11.710 bw ( KiB/s): min= 640, max= 1024, per=3.97%, avg=759.16, stdev=104.00, samples=19 00:20:11.710 iops : min= 160, max= 256, avg=189.79, stdev=26.00, samples=19 00:20:11.710 lat (msec) : 10=0.36%, 20=0.31%, 50=4.11%, 100=68.89%, 250=26.33% 00:20:11.710 cpu : usr=44.10%, sys=2.51%, ctx=1331, majf=0, minf=9 00:20:11.710 IO depths : 1=0.2%, 2=2.3%, 4=8.7%, 8=74.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=89.2%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.710 filename1: (groupid=0, jobs=1): err= 0: pid=86524: Fri Jul 12 06:44:49 2024 00:20:11.710 read: IOPS=207, BW=831KiB/s (851kB/s)(8360KiB/10057msec) 00:20:11.710 slat (nsec): min=5131, max=40000, avg=12989.90, stdev=4737.02 00:20:11.710 clat (usec): min=1486, max=148830, avg=76881.95, stdev=29344.54 00:20:11.710 lat (usec): min=1494, max=148840, avg=76894.94, stdev=29344.92 00:20:11.710 clat percentiles (usec): 00:20:11.710 | 1.00th=[ 1614], 5.00th=[ 4113], 10.00th=[ 44303], 20.00th=[ 58983], 00:20:11.710 | 30.00th=[ 64750], 40.00th=[ 71828], 50.00th=[ 73925], 60.00th=[ 85459], 00:20:11.710 | 70.00th=[ 96994], 80.00th=[104334], 90.00th=[109577], 95.00th=[112722], 00:20:11.710 | 99.00th=[135267], 99.50th=[141558], 99.90th=[143655], 99.95th=[143655], 00:20:11.710 | 99.99th=[147850] 00:20:11.710 bw ( KiB/s): min= 526, max= 2160, per=4.34%, avg=829.20, stdev=342.55, samples=20 00:20:11.710 iops : min= 131, max= 540, avg=207.25, stdev=85.67, samples=20 00:20:11.710 lat (msec) : 2=3.06%, 4=1.44%, 
10=1.63%, 20=0.77%, 50=8.37% 00:20:11.710 lat (msec) : 100=58.52%, 250=26.22% 00:20:11.710 cpu : usr=42.84%, sys=2.27%, ctx=1330, majf=0, minf=9 00:20:11.710 IO depths : 1=0.3%, 2=1.9%, 4=6.5%, 8=76.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.710 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename1: (groupid=0, jobs=1): err= 0: pid=86525: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=194, BW=778KiB/s (796kB/s)(7800KiB/10028msec) 00:20:11.711 slat (usec): min=4, max=8026, avg=19.08, stdev=183.95 00:20:11.711 clat (msec): min=23, max=143, avg=82.17, stdev=21.88 00:20:11.711 lat (msec): min=23, max=143, avg=82.19, stdev=21.88 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:20:11.711 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 90], 00:20:11.711 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 117], 00:20:11.711 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:20:11.711 | 99.99th=[ 144] 00:20:11.711 bw ( KiB/s): min= 592, max= 1024, per=4.05%, avg=773.65, stdev=129.61, samples=20 00:20:11.711 iops : min= 148, max= 256, avg=193.40, stdev=32.39, samples=20 00:20:11.711 lat (msec) : 50=8.31%, 100=65.90%, 250=25.79% 00:20:11.711 cpu : usr=32.02%, sys=1.59%, ctx=950, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.3%, 16=17.0%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename1: (groupid=0, jobs=1): err= 0: pid=86526: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=199, BW=799KiB/s (818kB/s)(8012KiB/10025msec) 00:20:11.711 slat (usec): min=8, max=4080, avg=18.40, stdev=104.97 00:20:11.711 clat (msec): min=23, max=143, avg=79.97, stdev=21.51 00:20:11.711 lat (msec): min=23, max=143, avg=79.99, stdev=21.52 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 63], 00:20:11.711 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:20:11.711 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 114], 00:20:11.711 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 130], 99.95th=[ 140], 00:20:11.711 | 99.99th=[ 144] 00:20:11.711 bw ( KiB/s): min= 640, max= 1040, per=4.17%, avg=796.05, stdev=127.31, samples=20 00:20:11.711 iops : min= 160, max= 260, avg=199.00, stdev=31.81, samples=20 00:20:11.711 lat (msec) : 50=9.24%, 100=67.15%, 250=23.61% 00:20:11.711 cpu : usr=41.35%, sys=2.52%, ctx=1343, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename1: (groupid=0, jobs=1): err= 0: pid=86527: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=202, BW=810KiB/s 
(829kB/s)(8104KiB/10011msec) 00:20:11.711 slat (usec): min=8, max=8030, avg=26.84, stdev=308.18 00:20:11.711 clat (msec): min=13, max=159, avg=78.93, stdev=22.67 00:20:11.711 lat (msec): min=13, max=159, avg=78.96, stdev=22.68 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.711 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:20:11.711 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 112], 00:20:11.711 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 142], 99.95th=[ 144], 00:20:11.711 | 99.99th=[ 161] 00:20:11.711 bw ( KiB/s): min= 664, max= 1072, per=4.23%, avg=808.58, stdev=140.21, samples=19 00:20:11.711 iops : min= 166, max= 268, avg=202.11, stdev=35.09, samples=19 00:20:11.711 lat (msec) : 20=0.30%, 50=12.24%, 100=65.25%, 250=22.21% 00:20:11.711 cpu : usr=31.34%, sys=1.87%, ctx=881, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename1: (groupid=0, jobs=1): err= 0: pid=86528: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=208, BW=833KiB/s (853kB/s)(8352KiB/10021msec) 00:20:11.711 slat (usec): min=4, max=8028, avg=23.68, stdev=214.80 00:20:11.711 clat (msec): min=23, max=141, avg=76.66, stdev=22.27 00:20:11.711 lat (msec): min=23, max=141, avg=76.68, stdev=22.26 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 59], 00:20:11.711 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:20:11.711 | 70.00th=[ 92], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 113], 00:20:11.711 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 129], 00:20:11.711 | 99.99th=[ 142] 00:20:11.711 bw ( KiB/s): min= 664, max= 1080, per=4.34%, avg=828.80, stdev=149.09, samples=20 00:20:11.711 iops : min= 166, max= 270, avg=207.20, stdev=37.27, samples=20 00:20:11.711 lat (msec) : 50=14.75%, 100=63.94%, 250=21.31% 00:20:11.711 cpu : usr=42.35%, sys=2.25%, ctx=1220, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename2: (groupid=0, jobs=1): err= 0: pid=86529: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=193, BW=774KiB/s (793kB/s)(7768KiB/10033msec) 00:20:11.711 slat (usec): min=5, max=8036, avg=31.55, stdev=363.56 00:20:11.711 clat (msec): min=35, max=132, avg=82.40, stdev=20.88 00:20:11.711 lat (msec): min=35, max=132, avg=82.43, stdev=20.87 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 39], 5.00th=[ 49], 10.00th=[ 60], 20.00th=[ 63], 00:20:11.711 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 87], 00:20:11.711 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 117], 00:20:11.711 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:20:11.711 | 99.99th=[ 133] 00:20:11.711 bw ( KiB/s): min= 640, max= 1008, per=4.05%, avg=773.25, 
stdev=123.01, samples=20 00:20:11.711 iops : min= 160, max= 252, avg=193.30, stdev=30.75, samples=20 00:20:11.711 lat (msec) : 50=5.87%, 100=70.03%, 250=24.10% 00:20:11.711 cpu : usr=31.49%, sys=1.73%, ctx=867, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=76.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=89.1%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename2: (groupid=0, jobs=1): err= 0: pid=86530: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=203, BW=815KiB/s (834kB/s)(8164KiB/10021msec) 00:20:11.711 slat (usec): min=4, max=4025, avg=17.18, stdev=89.85 00:20:11.711 clat (msec): min=31, max=141, avg=78.43, stdev=22.39 00:20:11.711 lat (msec): min=31, max=141, avg=78.45, stdev=22.39 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.711 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 83], 00:20:11.711 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 115], 00:20:11.711 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 142], 00:20:11.711 | 99.99th=[ 142] 00:20:11.711 bw ( KiB/s): min= 656, max= 1072, per=4.24%, avg=810.00, stdev=144.07, samples=20 00:20:11.711 iops : min= 164, max= 268, avg=202.50, stdev=36.02, samples=20 00:20:11.711 lat (msec) : 50=12.74%, 100=64.87%, 250=22.39% 00:20:11.711 cpu : usr=41.65%, sys=2.43%, ctx=1404, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename2: (groupid=0, jobs=1): err= 0: pid=86531: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=209, BW=837KiB/s (857kB/s)(8380KiB/10016msec) 00:20:11.711 slat (usec): min=5, max=8025, avg=18.35, stdev=175.10 00:20:11.711 clat (msec): min=20, max=125, avg=76.39, stdev=21.99 00:20:11.711 lat (msec): min=20, max=125, avg=76.40, stdev=21.99 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.711 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:20:11.711 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 112], 00:20:11.711 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 126], 00:20:11.711 | 99.99th=[ 126] 00:20:11.711 bw ( KiB/s): min= 664, max= 1104, per=4.36%, avg=833.35, stdev=130.62, samples=20 00:20:11.711 iops : min= 166, max= 276, avg=208.30, stdev=32.69, samples=20 00:20:11.711 lat (msec) : 50=14.46%, 100=65.25%, 250=20.29% 00:20:11.711 cpu : usr=37.82%, sys=1.81%, ctx=1074, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename2: (groupid=0, jobs=1): err= 0: 
pid=86532: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=182, BW=732KiB/s (749kB/s)(7340KiB/10033msec) 00:20:11.711 slat (usec): min=5, max=8032, avg=34.28, stdev=385.01 00:20:11.711 clat (msec): min=35, max=153, avg=87.26, stdev=19.89 00:20:11.711 lat (msec): min=35, max=153, avg=87.29, stdev=19.88 00:20:11.711 clat percentiles (msec): 00:20:11.711 | 1.00th=[ 50], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 70], 00:20:11.711 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 96], 00:20:11.711 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 121], 00:20:11.711 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 155], 00:20:11.711 | 99.99th=[ 155] 00:20:11.711 bw ( KiB/s): min= 624, max= 1008, per=3.81%, avg=727.65, stdev=89.73, samples=20 00:20:11.711 iops : min= 156, max= 252, avg=181.90, stdev=22.45, samples=20 00:20:11.711 lat (msec) : 50=1.20%, 100=66.70%, 250=32.10% 00:20:11.711 cpu : usr=32.22%, sys=1.89%, ctx=964, majf=0, minf=9 00:20:11.711 IO depths : 1=0.1%, 2=2.7%, 4=11.1%, 8=71.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 complete : 0=0.0%, 4=90.4%, 8=7.1%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.711 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.711 filename2: (groupid=0, jobs=1): err= 0: pid=86533: Fri Jul 12 06:44:49 2024 00:20:11.711 read: IOPS=199, BW=798KiB/s (817kB/s)(8004KiB/10030msec) 00:20:11.711 slat (usec): min=5, max=5023, avg=18.88, stdev=143.61 00:20:11.711 clat (msec): min=23, max=143, avg=80.04, stdev=21.69 00:20:11.712 lat (msec): min=23, max=143, avg=80.06, stdev=21.68 00:20:11.712 clat percentiles (msec): 00:20:11.712 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 64], 00:20:11.712 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 85], 00:20:11.712 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 112], 00:20:11.712 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 144], 00:20:11.712 | 99.99th=[ 144] 00:20:11.712 bw ( KiB/s): min= 656, max= 1064, per=4.17%, avg=796.45, stdev=134.70, samples=20 00:20:11.712 iops : min= 164, max= 266, avg=199.10, stdev=33.68, samples=20 00:20:11.712 lat (msec) : 50=10.09%, 100=66.82%, 250=23.09% 00:20:11.712 cpu : usr=40.85%, sys=2.44%, ctx=1286, majf=0, minf=9 00:20:11.712 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:11.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.712 filename2: (groupid=0, jobs=1): err= 0: pid=86534: Fri Jul 12 06:44:49 2024 00:20:11.712 read: IOPS=200, BW=801KiB/s (820kB/s)(8040KiB/10038msec) 00:20:11.712 slat (usec): min=7, max=8036, avg=29.14, stdev=322.15 00:20:11.712 clat (msec): min=13, max=143, avg=79.70, stdev=22.16 00:20:11.712 lat (msec): min=13, max=143, avg=79.73, stdev=22.16 00:20:11.712 clat percentiles (msec): 00:20:11.712 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:20:11.712 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:20:11.712 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 111], 00:20:11.712 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 144], 00:20:11.712 | 99.99th=[ 144] 00:20:11.712 bw 
( KiB/s): min= 656, max= 1088, per=4.19%, avg=800.15, stdev=133.15, samples=20 00:20:11.712 iops : min= 164, max= 272, avg=200.00, stdev=33.30, samples=20 00:20:11.712 lat (msec) : 20=0.80%, 50=10.15%, 100=66.32%, 250=22.74% 00:20:11.712 cpu : usr=31.35%, sys=1.85%, ctx=904, majf=0, minf=9 00:20:11.712 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:11.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.712 filename2: (groupid=0, jobs=1): err= 0: pid=86535: Fri Jul 12 06:44:49 2024 00:20:11.712 read: IOPS=200, BW=803KiB/s (822kB/s)(8044KiB/10015msec) 00:20:11.712 slat (usec): min=4, max=8026, avg=30.36, stdev=304.38 00:20:11.712 clat (msec): min=24, max=144, avg=79.53, stdev=21.80 00:20:11.712 lat (msec): min=24, max=144, avg=79.56, stdev=21.80 00:20:11.712 clat percentiles (msec): 00:20:11.712 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 62], 00:20:11.712 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:20:11.712 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 113], 00:20:11.712 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:20:11.712 | 99.99th=[ 144] 00:20:11.712 bw ( KiB/s): min= 640, max= 1072, per=4.19%, avg=800.40, stdev=129.61, samples=20 00:20:11.712 iops : min= 160, max= 268, avg=200.10, stdev=32.40, samples=20 00:20:11.712 lat (msec) : 50=11.29%, 100=66.43%, 250=22.28% 00:20:11.712 cpu : usr=37.45%, sys=2.24%, ctx=1026, majf=0, minf=9 00:20:11.712 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:11.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.712 filename2: (groupid=0, jobs=1): err= 0: pid=86536: Fri Jul 12 06:44:49 2024 00:20:11.712 read: IOPS=198, BW=796KiB/s (815kB/s)(7984KiB/10035msec) 00:20:11.712 slat (usec): min=6, max=4023, avg=16.02, stdev=89.88 00:20:11.712 clat (msec): min=10, max=155, avg=80.33, stdev=22.52 00:20:11.712 lat (msec): min=10, max=155, avg=80.34, stdev=22.52 00:20:11.712 clat percentiles (msec): 00:20:11.712 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 64], 00:20:11.712 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:20:11.712 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 113], 00:20:11.712 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 157], 00:20:11.712 | 99.99th=[ 157] 00:20:11.712 bw ( KiB/s): min= 656, max= 1120, per=4.14%, avg=791.75, stdev=146.81, samples=20 00:20:11.712 iops : min= 164, max= 280, avg=197.90, stdev=36.71, samples=20 00:20:11.712 lat (msec) : 20=0.80%, 50=9.72%, 100=62.93%, 250=26.55% 00:20:11.712 cpu : usr=41.54%, sys=2.47%, ctx=1294, majf=0, minf=9 00:20:11.712 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:11.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.712 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.712 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:20:11.712 00:20:11.712 Run status group 0 (all jobs): 00:20:11.712 READ: bw=18.6MiB/s (19.6MB/s), 732KiB/s-839KiB/s (749kB/s-859kB/s), io=188MiB (197MB), run=10003-10057msec 00:20:11.712 06:44:49 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:11.712 06:44:49 -- target/dif.sh@43 -- # local sub 00:20:11.712 06:44:49 -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.712 06:44:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:11.712 06:44:49 -- target/dif.sh@36 -- # local sub_id=0 00:20:11.712 06:44:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.712 06:44:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:11.712 06:44:49 -- target/dif.sh@36 -- # local sub_id=1 00:20:11.712 06:44:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.712 06:44:49 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:11.712 06:44:49 -- target/dif.sh@36 -- # local sub_id=2 00:20:11.712 06:44:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:11.712 06:44:49 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:11.712 06:44:49 -- target/dif.sh@115 -- # numjobs=2 00:20:11.712 06:44:49 -- target/dif.sh@115 -- # iodepth=8 00:20:11.712 06:44:49 -- target/dif.sh@115 -- # runtime=5 00:20:11.712 06:44:49 -- target/dif.sh@115 -- # files=1 00:20:11.712 06:44:49 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:11.712 06:44:49 -- target/dif.sh@28 -- # local sub 00:20:11.712 06:44:49 -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.712 06:44:49 -- target/dif.sh@31 -- # create_subsystem 0 00:20:11.712 06:44:49 -- target/dif.sh@18 -- # local sub_id=0 00:20:11.712 06:44:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
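The create_subsystems sequence being replayed here for the 8k/16k/128k job mix maps one-to-one onto plain SPDK RPCs. A standalone sketch of the same setup, assuming a running nvmf target with a TCP transport already created and scripts/rpc.py talking to the default /var/tmp/spdk.sock (all command arguments are as in the rpc_cmd calls in this log):

  # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # NVMe-oF subsystem, namespace, and TCP listener backed by the null bdev
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420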
00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 bdev_null0 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 [2024-07-12 06:44:49.827408] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.712 06:44:49 -- target/dif.sh@31 -- # create_subsystem 1 00:20:11.712 06:44:49 -- target/dif.sh@18 -- # local sub_id=1 00:20:11.712 06:44:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 bdev_null1 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.712 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.712 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.712 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.712 06:44:49 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:11.712 06:44:49 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:11.712 06:44:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:11.712 06:44:49 -- nvmf/common.sh@520 -- # config=() 00:20:11.713 06:44:49 -- nvmf/common.sh@520 -- # local subsystem config 00:20:11.713 06:44:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:11.713 06:44:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.713 
06:44:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:11.713 { 00:20:11.713 "params": { 00:20:11.713 "name": "Nvme$subsystem", 00:20:11.713 "trtype": "$TEST_TRANSPORT", 00:20:11.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.713 "adrfam": "ipv4", 00:20:11.713 "trsvcid": "$NVMF_PORT", 00:20:11.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.713 "hdgst": ${hdgst:-false}, 00:20:11.713 "ddgst": ${ddgst:-false} 00:20:11.713 }, 00:20:11.713 "method": "bdev_nvme_attach_controller" 00:20:11.713 } 00:20:11.713 EOF 00:20:11.713 )") 00:20:11.713 06:44:49 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.713 06:44:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:11.713 06:44:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:11.713 06:44:49 -- target/dif.sh@82 -- # gen_fio_conf 00:20:11.713 06:44:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:11.713 06:44:49 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.713 06:44:49 -- target/dif.sh@54 -- # local file 00:20:11.713 06:44:49 -- common/autotest_common.sh@1320 -- # shift 00:20:11.713 06:44:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:11.713 06:44:49 -- target/dif.sh@56 -- # cat 00:20:11.713 06:44:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.713 06:44:49 -- nvmf/common.sh@542 -- # cat 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:11.713 06:44:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:11.713 06:44:49 -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.713 06:44:49 -- target/dif.sh@73 -- # cat 00:20:11.713 06:44:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:11.713 06:44:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:11.713 { 00:20:11.713 "params": { 00:20:11.713 "name": "Nvme$subsystem", 00:20:11.713 "trtype": "$TEST_TRANSPORT", 00:20:11.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.713 "adrfam": "ipv4", 00:20:11.713 "trsvcid": "$NVMF_PORT", 00:20:11.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.713 "hdgst": ${hdgst:-false}, 00:20:11.713 "ddgst": ${ddgst:-false} 00:20:11.713 }, 00:20:11.713 "method": "bdev_nvme_attach_controller" 00:20:11.713 } 00:20:11.713 EOF 00:20:11.713 )") 00:20:11.713 06:44:49 -- target/dif.sh@72 -- # (( file++ )) 00:20:11.713 06:44:49 -- nvmf/common.sh@542 -- # cat 00:20:11.713 06:44:49 -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.713 06:44:49 -- nvmf/common.sh@544 -- # jq . 
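[Editor's note] What nvmf/common.sh@542-546 is doing here: gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem from a parameterized heredoc, collects the fragments in an array, joins them with commas, and pretty-prints the result through jq before it reaches fio as --spdk_json_conf. A stripped-down sketch of the same pattern follows; the outer {"subsystems": ...} wrapper is not visible in the trace, so treat that final shape as an assumption.

config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,   # global IFS change, kept as in the traced helper
# assumed wrapper: fio's --spdk_json_conf takes a full SPDK subsystem config
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
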
00:20:11.713 06:44:49 -- nvmf/common.sh@545 -- # IFS=, 00:20:11.713 06:44:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:11.713 "params": { 00:20:11.713 "name": "Nvme0", 00:20:11.713 "trtype": "tcp", 00:20:11.713 "traddr": "10.0.0.2", 00:20:11.713 "adrfam": "ipv4", 00:20:11.713 "trsvcid": "4420", 00:20:11.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.713 "hdgst": false, 00:20:11.713 "ddgst": false 00:20:11.713 }, 00:20:11.713 "method": "bdev_nvme_attach_controller" 00:20:11.713 },{ 00:20:11.713 "params": { 00:20:11.713 "name": "Nvme1", 00:20:11.713 "trtype": "tcp", 00:20:11.713 "traddr": "10.0.0.2", 00:20:11.713 "adrfam": "ipv4", 00:20:11.713 "trsvcid": "4420", 00:20:11.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.713 "hdgst": false, 00:20:11.713 "ddgst": false 00:20:11.713 }, 00:20:11.713 "method": "bdev_nvme_attach_controller" 00:20:11.713 }' 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:11.713 06:44:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:11.713 06:44:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:11.713 06:44:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:11.713 06:44:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:11.713 06:44:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:11.713 06:44:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.713 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:11.713 ... 00:20:11.713 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:11.713 ... 00:20:11.713 fio-3.35 00:20:11.713 Starting 4 threads 00:20:11.713 [2024-07-12 06:44:50.445186] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
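[Editor's note] Two notes on this stretch of the log. First, the invocation pattern: fio_bdev preloads the SPDK fio engine and hands both the generated JSON and the job file to fio as file descriptors, which is why /dev/fd/62 and /dev/fd/61 appear on the command line. An equivalent standalone invocation would look roughly like the sketch below; the paths come from the trace, while dif.job is a hypothetical job file standing in for the fd-fed job (rw=randread, bs=8k,16k,128k, iodepth=8, numjobs=2, per the parameters set at target/dif.sh@115).

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf=<(gen_nvmf_target_json 0 1) \
  dif.job   # hypothetical; the real run passes the job over /dev/fd/61

Second, the rpc.c *ERROR* lines surrounding this note (this one and its companion on the next line) are expected: the fio plugin brings up its own SPDK application, which tries to bind the default RPC socket at /var/tmp/spdk.sock while the nvmf target still owns it. The run is unaffected, as the fio threads starting right afterwards show.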
00:20:11.713 [2024-07-12 06:44:50.445782] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:15.897 00:20:15.897 filename0: (groupid=0, jobs=1): err= 0: pid=86682: Fri Jul 12 06:44:55 2024 00:20:15.897 read: IOPS=2216, BW=17.3MiB/s (18.2MB/s)(86.6MiB/5002msec) 00:20:15.897 slat (nsec): min=3662, max=67832, avg=15862.14, stdev=4321.26 00:20:15.897 clat (usec): min=1656, max=6249, avg=3571.76, stdev=1125.81 00:20:15.897 lat (usec): min=1678, max=6263, avg=3587.63, stdev=1125.11 00:20:15.897 clat percentiles (usec): 00:20:15.897 | 1.00th=[ 1909], 5.00th=[ 1926], 10.00th=[ 2008], 20.00th=[ 2180], 00:20:15.897 | 30.00th=[ 2966], 40.00th=[ 3097], 50.00th=[ 3228], 60.00th=[ 4555], 00:20:15.897 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4883], 00:20:15.897 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5145], 99.95th=[ 5211], 00:20:15.897 | 99.99th=[ 5211] 00:20:15.897 bw ( KiB/s): min=17312, max=17952, per=26.87%, avg=17756.44, stdev=198.68, samples=9 00:20:15.897 iops : min= 2164, max= 2244, avg=2219.56, stdev=24.84, samples=9 00:20:15.897 lat (msec) : 2=9.82%, 4=44.86%, 10=45.31% 00:20:15.897 cpu : usr=91.66%, sys=7.32%, ctx=6, majf=0, minf=9 00:20:15.897 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.897 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.897 issued rwts: total=11087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.897 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.897 filename0: (groupid=0, jobs=1): err= 0: pid=86683: Fri Jul 12 06:44:55 2024 00:20:15.897 read: IOPS=2217, BW=17.3MiB/s (18.2MB/s)(86.7MiB/5003msec) 00:20:15.897 slat (nsec): min=6110, max=57391, avg=12217.05, stdev=4481.01 00:20:15.897 clat (usec): min=1448, max=6214, avg=3577.68, stdev=1117.59 00:20:15.897 lat (usec): min=1461, max=6228, avg=3589.90, stdev=1115.96 00:20:15.897 clat percentiles (usec): 00:20:15.897 | 1.00th=[ 2008], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2278], 00:20:15.897 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3130], 60.00th=[ 4490], 00:20:15.897 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 4948], 00:20:15.897 | 99.00th=[ 5080], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5211], 00:20:15.897 | 99.99th=[ 5276] 00:20:15.897 bw ( KiB/s): min=17376, max=17952, per=26.89%, avg=17772.44, stdev=179.56, samples=9 00:20:15.897 iops : min= 2172, max= 2244, avg=2221.56, stdev=22.44, samples=9 00:20:15.897 lat (msec) : 2=0.79%, 4=53.94%, 10=45.27% 00:20:15.897 cpu : usr=90.92%, sys=8.04%, ctx=9, majf=0, minf=9 00:20:15.897 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.897 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.897 issued rwts: total=11092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.897 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.897 filename1: (groupid=0, jobs=1): err= 0: pid=86684: Fri Jul 12 06:44:55 2024 00:20:15.897 read: IOPS=2216, BW=17.3MiB/s (18.2MB/s)(86.6MiB/5002msec) 00:20:15.897 slat (nsec): min=7738, max=65374, avg=15338.71, stdev=4060.15 00:20:15.897 clat (usec): min=1648, max=7090, avg=3574.16, stdev=1123.40 00:20:15.897 lat (usec): min=1663, max=7116, avg=3589.50, stdev=1123.75 00:20:15.897 clat percentiles (usec): 00:20:15.897 | 1.00th=[ 1909], 
5.00th=[ 1942], 10.00th=[ 2008], 20.00th=[ 2180], 00:20:15.897 | 30.00th=[ 2966], 40.00th=[ 3097], 50.00th=[ 3228], 60.00th=[ 4555], 00:20:15.897 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4883], 00:20:15.897 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5145], 99.95th=[ 5211], 00:20:15.897 | 99.99th=[ 5211] 00:20:15.897 bw ( KiB/s): min=17312, max=17952, per=26.87%, avg=17756.44, stdev=198.68, samples=9 00:20:15.897 iops : min= 2164, max= 2244, avg=2219.56, stdev=24.84, samples=9 00:20:15.897 lat (msec) : 2=9.51%, 4=45.19%, 10=45.31% 00:20:15.897 cpu : usr=90.26%, sys=8.74%, ctx=5, majf=0, minf=9 00:20:15.897 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.897 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.897 issued rwts: total=11087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.897 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.898 filename1: (groupid=0, jobs=1): err= 0: pid=86685: Fri Jul 12 06:44:55 2024 00:20:15.898 read: IOPS=1612, BW=12.6MiB/s (13.2MB/s)(63.0MiB/5001msec) 00:20:15.898 slat (nsec): min=7490, max=49129, avg=13209.86, stdev=4801.66 00:20:15.898 clat (usec): min=1312, max=6452, avg=4905.28, stdev=174.39 00:20:15.898 lat (usec): min=1324, max=6479, avg=4918.49, stdev=174.72 00:20:15.898 clat percentiles (usec): 00:20:15.898 | 1.00th=[ 4752], 5.00th=[ 4752], 10.00th=[ 4752], 20.00th=[ 4817], 00:20:15.898 | 30.00th=[ 4817], 40.00th=[ 4883], 50.00th=[ 4883], 60.00th=[ 4948], 00:20:15.898 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5080], 00:20:15.898 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 6128], 99.95th=[ 6128], 00:20:15.898 | 99.99th=[ 6456] 00:20:15.898 bw ( KiB/s): min=12544, max=13056, per=19.52%, avg=12902.33, stdev=166.02, samples=9 00:20:15.898 iops : min= 1568, max= 1632, avg=1612.78, stdev=20.75, samples=9 00:20:15.898 lat (msec) : 2=0.10%, 4=0.10%, 10=99.80% 00:20:15.898 cpu : usr=91.56%, sys=7.68%, ctx=5, majf=0, minf=9 00:20:15.898 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.898 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.898 issued rwts: total=8064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.898 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.898 00:20:15.898 Run status group 0 (all jobs): 00:20:15.898 READ: bw=64.5MiB/s (67.7MB/s), 12.6MiB/s-17.3MiB/s (13.2MB/s-18.2MB/s), io=323MiB (339MB), run=5001-5003msec 00:20:15.898 06:44:55 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:15.898 06:44:55 -- target/dif.sh@43 -- # local sub 00:20:15.898 06:44:55 -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.898 06:44:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:15.898 06:44:55 -- target/dif.sh@36 -- # local sub_id=0 00:20:15.898 06:44:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:15.898 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:15.898 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.898 06:44:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:15.898 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 
00:20:15.898 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.898 06:44:55 -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.898 06:44:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:15.898 06:44:55 -- target/dif.sh@36 -- # local sub_id=1 00:20:15.898 06:44:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.898 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:15.898 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.898 06:44:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:15.898 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:15.898 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:15.898 00:20:15.898 real 0m23.008s 00:20:15.898 user 2m3.505s 00:20:15.898 sys 0m8.564s 00:20:15.898 06:44:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.898 ************************************ 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:15.898 END TEST fio_dif_rand_params 00:20:15.898 ************************************ 00:20:15.898 06:44:55 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:15.898 06:44:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:15.898 06:44:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:15.898 ************************************ 00:20:15.898 START TEST fio_dif_digest 00:20:15.898 ************************************ 00:20:15.898 06:44:55 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:20:15.898 06:44:55 -- target/dif.sh@123 -- # local NULL_DIF 00:20:15.898 06:44:55 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:15.898 06:44:55 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:15.898 06:44:55 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:15.898 06:44:55 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:15.898 06:44:55 -- target/dif.sh@127 -- # numjobs=3 00:20:15.898 06:44:55 -- target/dif.sh@127 -- # iodepth=3 00:20:15.898 06:44:55 -- target/dif.sh@127 -- # runtime=10 00:20:15.898 06:44:55 -- target/dif.sh@128 -- # hdgst=true 00:20:15.898 06:44:55 -- target/dif.sh@128 -- # ddgst=true 00:20:15.898 06:44:55 -- target/dif.sh@130 -- # create_subsystems 0 00:20:15.898 06:44:55 -- target/dif.sh@28 -- # local sub 00:20:15.898 06:44:55 -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.898 06:44:55 -- target/dif.sh@31 -- # create_subsystem 0 00:20:15.898 06:44:55 -- target/dif.sh@18 -- # local sub_id=0 00:20:15.898 06:44:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:15.898 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:15.898 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:16.156 bdev_null0 00:20:16.156 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.156 06:44:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:16.156 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:16.156 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:16.156 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.156 06:44:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:20:16.156 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:16.156 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:16.156 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.156 06:44:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:16.156 06:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:16.156 06:44:55 -- common/autotest_common.sh@10 -- # set +x 00:20:16.156 [2024-07-12 06:44:55.846579] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.156 06:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.156 06:44:55 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:16.156 06:44:55 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:16.156 06:44:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:16.156 06:44:55 -- nvmf/common.sh@520 -- # config=() 00:20:16.156 06:44:55 -- nvmf/common.sh@520 -- # local subsystem config 00:20:16.156 06:44:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.156 06:44:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:16.156 06:44:55 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.156 06:44:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:16.156 { 00:20:16.156 "params": { 00:20:16.156 "name": "Nvme$subsystem", 00:20:16.156 "trtype": "$TEST_TRANSPORT", 00:20:16.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.156 "adrfam": "ipv4", 00:20:16.156 "trsvcid": "$NVMF_PORT", 00:20:16.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.156 "hdgst": ${hdgst:-false}, 00:20:16.156 "ddgst": ${ddgst:-false} 00:20:16.156 }, 00:20:16.156 "method": "bdev_nvme_attach_controller" 00:20:16.156 } 00:20:16.156 EOF 00:20:16.156 )") 00:20:16.156 06:44:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:16.156 06:44:55 -- target/dif.sh@82 -- # gen_fio_conf 00:20:16.156 06:44:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:16.156 06:44:55 -- target/dif.sh@54 -- # local file 00:20:16.156 06:44:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:16.156 06:44:55 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.156 06:44:55 -- target/dif.sh@56 -- # cat 00:20:16.156 06:44:55 -- common/autotest_common.sh@1320 -- # shift 00:20:16.156 06:44:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:16.156 06:44:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.156 06:44:55 -- nvmf/common.sh@542 -- # cat 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:16.156 06:44:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:16.156 06:44:55 -- target/dif.sh@72 -- # (( file <= files )) 00:20:16.156 06:44:55 -- nvmf/common.sh@544 -- # jq . 
00:20:16.156 06:44:55 -- nvmf/common.sh@545 -- # IFS=, 00:20:16.156 06:44:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:16.156 "params": { 00:20:16.156 "name": "Nvme0", 00:20:16.156 "trtype": "tcp", 00:20:16.156 "traddr": "10.0.0.2", 00:20:16.156 "adrfam": "ipv4", 00:20:16.156 "trsvcid": "4420", 00:20:16.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:16.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:16.156 "hdgst": true, 00:20:16.156 "ddgst": true 00:20:16.156 }, 00:20:16.156 "method": "bdev_nvme_attach_controller" 00:20:16.156 }' 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:16.156 06:44:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:16.156 06:44:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:16.156 06:44:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:16.156 06:44:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:16.156 06:44:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:16.156 06:44:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.156 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:16.156 ... 00:20:16.157 fio-3.35 00:20:16.157 Starting 3 threads 00:20:16.722 [2024-07-12 06:44:56.372581] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
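[Editor's note] The JSON printed here is identical to the one used by the rand_params run except for the last two params: "hdgst": true and "ddgst": true. fio_dif_digest flips them by setting plain shell variables before the heredoc is expanded (the hdgst=true / ddgst=true assignments at target/dif.sh@128 in the trace above), so the ${hdgst:-false} / ${ddgst:-false} defaults resolve to true and bdev_nvme_attach_controller brings the controller up with NVMe/TCP header and data digests, i.e. CRC32C checksums on each PDU. A sketch of the toggle:

# set before create_json_sub_conf / gen_nvmf_target_json run; nothing else changes
hdgst=true
ddgst=true
# every config fragment expanded after this carries "hdgst": true, "ddgst": true

(The same benign rpc.c socket errors as in the earlier run follow on the next line.)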
00:20:16.722 [2024-07-12 06:44:56.372664] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:26.691 00:20:26.691 filename0: (groupid=0, jobs=1): err= 0: pid=86791: Fri Jul 12 06:45:06 2024 00:20:26.691 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(272MiB/10004msec) 00:20:26.691 slat (nsec): min=7798, max=49780, avg=15239.20, stdev=5634.08 00:20:26.691 clat (usec): min=10951, max=18200, avg=13781.66, stdev=655.79 00:20:26.691 lat (usec): min=10960, max=18223, avg=13796.90, stdev=654.61 00:20:26.691 clat percentiles (usec): 00:20:26.691 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13304], 00:20:26.691 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:20:26.691 | 70.00th=[13829], 80.00th=[14484], 90.00th=[14877], 95.00th=[15008], 00:20:26.691 | 99.00th=[15270], 99.50th=[15401], 99.90th=[18220], 99.95th=[18220], 00:20:26.691 | 99.99th=[18220] 00:20:26.691 bw ( KiB/s): min=25394, max=29184, per=33.32%, avg=27771.89, stdev=1087.91, samples=19 00:20:26.691 iops : min= 198, max= 228, avg=216.95, stdev= 8.55, samples=19 00:20:26.691 lat (msec) : 20=100.00% 00:20:26.691 cpu : usr=91.97%, sys=7.47%, ctx=21, majf=0, minf=9 00:20:26.691 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:26.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.692 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.692 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:26.692 filename0: (groupid=0, jobs=1): err= 0: pid=86792: Fri Jul 12 06:45:06 2024 00:20:26.692 read: IOPS=217, BW=27.1MiB/s (28.4MB/s)(272MiB/10007msec) 00:20:26.692 slat (usec): min=6, max=247, avg=14.98, stdev= 7.89 00:20:26.692 clat (usec): min=13033, max=17562, avg=13787.25, stdev=643.05 00:20:26.692 lat (usec): min=13042, max=17584, avg=13802.24, stdev=642.43 00:20:26.692 clat percentiles (usec): 00:20:26.692 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13304], 00:20:26.692 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:20:26.692 | 70.00th=[13829], 80.00th=[14484], 90.00th=[14877], 95.00th=[15008], 00:20:26.692 | 99.00th=[15270], 99.50th=[15401], 99.90th=[17433], 99.95th=[17433], 00:20:26.692 | 99.99th=[17433] 00:20:26.692 bw ( KiB/s): min=25344, max=29184, per=33.32%, avg=27769.26, stdev=1094.03, samples=19 00:20:26.692 iops : min= 198, max= 228, avg=216.95, stdev= 8.55, samples=19 00:20:26.692 lat (msec) : 20=100.00% 00:20:26.692 cpu : usr=91.39%, sys=7.80%, ctx=140, majf=0, minf=0 00:20:26.692 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:26.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.692 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.692 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:26.692 filename0: (groupid=0, jobs=1): err= 0: pid=86793: Fri Jul 12 06:45:06 2024 00:20:26.692 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(272MiB/10005msec) 00:20:26.692 slat (nsec): min=7726, max=48099, avg=15252.93, stdev=5500.44 00:20:26.692 clat (usec): min=13086, max=15548, avg=13783.78, stdev=629.99 00:20:26.692 lat (usec): min=13100, max=15573, avg=13799.03, stdev=629.21 00:20:26.692 clat percentiles (usec): 00:20:26.692 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 
20.00th=[13304], 00:20:26.692 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:20:26.692 | 70.00th=[13829], 80.00th=[14484], 90.00th=[14877], 95.00th=[15008], 00:20:26.692 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:20:26.692 | 99.99th=[15533] 00:20:26.692 bw ( KiB/s): min=25344, max=29184, per=33.32%, avg=27769.26, stdev=1094.03, samples=19 00:20:26.692 iops : min= 198, max= 228, avg=216.95, stdev= 8.55, samples=19 00:20:26.692 lat (msec) : 20=100.00% 00:20:26.692 cpu : usr=91.88%, sys=7.50%, ctx=18, majf=0, minf=0 00:20:26.692 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:26.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.692 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.692 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:26.692 00:20:26.692 Run status group 0 (all jobs): 00:20:26.692 READ: bw=81.4MiB/s (85.3MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.5MB/s), io=815MiB (854MB), run=10004-10007msec 00:20:26.952 06:45:06 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:26.952 06:45:06 -- target/dif.sh@43 -- # local sub 00:20:26.952 06:45:06 -- target/dif.sh@45 -- # for sub in "$@" 00:20:26.952 06:45:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:26.952 06:45:06 -- target/dif.sh@36 -- # local sub_id=0 00:20:26.952 06:45:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:26.952 06:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.952 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:20:26.952 06:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.952 06:45:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:26.952 06:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.952 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:20:26.952 06:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.952 00:20:26.952 real 0m10.851s 00:20:26.952 user 0m28.068s 00:20:26.952 sys 0m2.502s 00:20:26.952 ************************************ 00:20:26.952 END TEST fio_dif_digest 00:20:26.952 06:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.952 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:20:26.952 ************************************ 00:20:26.952 06:45:06 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:26.952 06:45:06 -- target/dif.sh@147 -- # nvmftestfini 00:20:26.952 06:45:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:26.952 06:45:06 -- nvmf/common.sh@116 -- # sync 00:20:26.952 06:45:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:26.952 06:45:06 -- nvmf/common.sh@119 -- # set +e 00:20:26.952 06:45:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:26.952 06:45:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:26.952 rmmod nvme_tcp 00:20:26.952 rmmod nvme_fabrics 00:20:26.952 rmmod nvme_keyring 00:20:26.952 06:45:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:26.952 06:45:06 -- nvmf/common.sh@123 -- # set -e 00:20:26.952 06:45:06 -- nvmf/common.sh@124 -- # return 0 00:20:26.952 06:45:06 -- nvmf/common.sh@477 -- # '[' -n 86023 ']' 00:20:26.952 06:45:06 -- nvmf/common.sh@478 -- # killprocess 86023 00:20:26.952 06:45:06 -- common/autotest_common.sh@926 -- # '[' -z 86023 ']' 00:20:26.952 06:45:06 -- common/autotest_common.sh@930 -- # kill -0 86023 
00:20:26.952 06:45:06 -- common/autotest_common.sh@931 -- # uname 00:20:26.952 06:45:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.952 06:45:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86023 00:20:26.952 06:45:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:26.952 killing process with pid 86023 00:20:26.952 06:45:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:26.952 06:45:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86023' 00:20:26.952 06:45:06 -- common/autotest_common.sh@945 -- # kill 86023 00:20:26.952 06:45:06 -- common/autotest_common.sh@950 -- # wait 86023 00:20:27.211 06:45:07 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:27.211 06:45:07 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:27.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:27.728 Waiting for block devices as requested 00:20:27.728 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:27.728 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:27.728 06:45:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:27.728 06:45:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:27.728 06:45:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.728 06:45:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:27.728 06:45:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.728 06:45:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:27.728 06:45:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.728 06:45:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:27.728 00:20:27.728 real 0m58.938s 00:20:27.728 user 3m46.131s 00:20:27.728 sys 0m19.551s 00:20:27.728 06:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.728 ************************************ 00:20:27.728 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:20:27.728 END TEST nvmf_dif 00:20:27.728 ************************************ 00:20:27.987 06:45:07 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:27.987 06:45:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:27.987 06:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:27.987 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:20:27.987 ************************************ 00:20:27.987 START TEST nvmf_abort_qd_sizes 00:20:27.987 ************************************ 00:20:27.987 06:45:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:27.987 * Looking for test storage... 
00:20:27.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:27.987 06:45:07 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.987 06:45:07 -- nvmf/common.sh@7 -- # uname -s 00:20:27.987 06:45:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.987 06:45:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.987 06:45:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.987 06:45:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.987 06:45:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.987 06:45:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.987 06:45:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.987 06:45:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.987 06:45:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.987 06:45:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.987 06:45:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 00:20:27.987 06:45:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=b322988a-296a-4d08-987d-2f44d8098168 00:20:27.987 06:45:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.987 06:45:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.987 06:45:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.987 06:45:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.987 06:45:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.987 06:45:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.987 06:45:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.987 06:45:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.987 06:45:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.987 06:45:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.987 06:45:07 -- paths/export.sh@5 -- # export PATH 00:20:27.987 06:45:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.987 06:45:07 -- nvmf/common.sh@46 -- # : 0 00:20:27.987 06:45:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:27.987 06:45:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:27.987 06:45:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:27.987 06:45:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.987 06:45:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.987 06:45:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:27.987 06:45:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:27.987 06:45:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:27.987 06:45:07 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:27.987 06:45:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:27.987 06:45:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.987 06:45:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:27.987 06:45:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:27.987 06:45:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:27.987 06:45:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.987 06:45:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:27.987 06:45:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.987 06:45:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:27.987 06:45:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:27.987 06:45:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:27.987 06:45:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:27.987 06:45:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:27.987 06:45:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:27.987 06:45:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.987 06:45:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.987 06:45:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:27.987 06:45:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:27.987 06:45:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.987 06:45:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.987 06:45:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.987 06:45:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.987 06:45:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.987 06:45:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.987 06:45:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.987 06:45:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.987 06:45:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:27.987 06:45:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:27.987 Cannot find device "nvmf_tgt_br" 00:20:27.987 06:45:07 -- nvmf/common.sh@154 -- # true 00:20:27.987 06:45:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.987 Cannot find device "nvmf_tgt_br2" 00:20:27.987 06:45:07 -- nvmf/common.sh@155 -- # true 
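[Editor's note] The "Cannot find device" and "Cannot open network namespace" messages in this stretch are expected: nvmf_veth_init first tears down any topology left over from a previous run, and on a clean host those deletions fail harmlessly. The build-up that follows on the next lines creates the veth-and-bridge topology the TCP tests rely on; condensed from the traced commands (names and addresses verbatim; error handling omitted, and the second target interface, nvmf_tgt_if2 at 10.0.0.3, set up the same way, is left out for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # host-side sanity check against the target address
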
00:20:27.987 06:45:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:27.987 06:45:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:27.987 Cannot find device "nvmf_tgt_br" 00:20:27.987 06:45:07 -- nvmf/common.sh@157 -- # true 00:20:27.987 06:45:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:27.987 Cannot find device "nvmf_tgt_br2" 00:20:27.987 06:45:07 -- nvmf/common.sh@158 -- # true 00:20:27.987 06:45:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:27.988 06:45:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:28.246 06:45:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:28.246 06:45:07 -- nvmf/common.sh@161 -- # true 00:20:28.246 06:45:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:28.246 06:45:07 -- nvmf/common.sh@162 -- # true 00:20:28.246 06:45:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:28.246 06:45:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:28.246 06:45:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:28.246 06:45:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:28.246 06:45:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:28.246 06:45:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:28.246 06:45:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:28.246 06:45:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:28.246 06:45:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:28.246 06:45:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:28.246 06:45:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:28.246 06:45:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:28.246 06:45:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:28.246 06:45:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:28.246 06:45:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:28.246 06:45:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:28.246 06:45:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:28.246 06:45:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:28.246 06:45:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.246 06:45:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:28.246 06:45:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:28.246 06:45:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:28.246 06:45:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:28.246 06:45:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:28.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:28.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:28.246 00:20:28.246 --- 10.0.0.2 ping statistics --- 00:20:28.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.246 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:28.246 06:45:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:28.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:28.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:28.246 00:20:28.246 --- 10.0.0.3 ping statistics --- 00:20:28.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.246 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:28.246 06:45:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:28.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:28.246 00:20:28.246 --- 10.0.0.1 ping statistics --- 00:20:28.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.246 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:28.246 06:45:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.246 06:45:08 -- nvmf/common.sh@421 -- # return 0 00:20:28.246 06:45:08 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:28.246 06:45:08 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:28.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:29.070 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:29.070 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:29.070 06:45:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.070 06:45:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:29.070 06:45:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:29.070 06:45:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.070 06:45:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:29.070 06:45:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:29.070 06:45:08 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:29.070 06:45:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:29.070 06:45:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:29.070 06:45:08 -- common/autotest_common.sh@10 -- # set +x 00:20:29.070 06:45:08 -- nvmf/common.sh@469 -- # nvmfpid=87382 00:20:29.070 06:45:08 -- nvmf/common.sh@470 -- # waitforlisten 87382 00:20:29.070 06:45:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:29.070 06:45:08 -- common/autotest_common.sh@819 -- # '[' -z 87382 ']' 00:20:29.070 06:45:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.070 06:45:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:29.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.070 06:45:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.070 06:45:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:29.070 06:45:08 -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 [2024-07-12 06:45:09.023358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:29.328 [2024-07-12 06:45:09.023444] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.328 [2024-07-12 06:45:09.165881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.328 [2024-07-12 06:45:09.219471] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:29.328 [2024-07-12 06:45:09.219669] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.328 [2024-07-12 06:45:09.219699] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.328 [2024-07-12 06:45:09.219710] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.328 [2024-07-12 06:45:09.219932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.328 [2024-07-12 06:45:09.220081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.328 [2024-07-12 06:45:09.221337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.328 [2024-07-12 06:45:09.221344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.263 06:45:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:30.263 06:45:10 -- common/autotest_common.sh@852 -- # return 0 00:20:30.263 06:45:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:30.263 06:45:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:30.263 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.263 06:45:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:30.263 06:45:10 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:30.263 06:45:10 -- scripts/common.sh@312 -- # local nvmes 00:20:30.263 06:45:10 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:30.263 06:45:10 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:30.263 06:45:10 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:30.263 06:45:10 -- scripts/common.sh@297 -- # local bdf= 00:20:30.263 06:45:10 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:30.263 06:45:10 -- scripts/common.sh@232 -- # local class 00:20:30.263 06:45:10 -- scripts/common.sh@233 -- # local subclass 00:20:30.263 06:45:10 -- scripts/common.sh@234 -- # local progif 00:20:30.263 06:45:10 -- scripts/common.sh@235 -- # printf %02x 1 00:20:30.263 06:45:10 -- scripts/common.sh@235 -- # class=01 00:20:30.263 06:45:10 -- scripts/common.sh@236 -- # printf %02x 8 00:20:30.263 06:45:10 -- scripts/common.sh@236 -- # subclass=08 00:20:30.263 06:45:10 -- scripts/common.sh@237 -- # printf %02x 2 00:20:30.263 06:45:10 -- scripts/common.sh@237 -- # progif=02 00:20:30.263 06:45:10 -- scripts/common.sh@239 -- # hash lspci 00:20:30.263 06:45:10 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:30.263 06:45:10 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:30.263 06:45:10 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:30.263 06:45:10 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:30.263 06:45:10 -- scripts/common.sh@244 -- # tr -d '"' 00:20:30.263 06:45:10 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:30.263 06:45:10 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:30.263 06:45:10 -- scripts/common.sh@15 -- # local i 00:20:30.263 06:45:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:30.263 06:45:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:30.263 06:45:10 -- scripts/common.sh@24 -- # return 0 00:20:30.263 06:45:10 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:30.263 06:45:10 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:30.263 06:45:10 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:30.263 06:45:10 -- scripts/common.sh@15 -- # local i 00:20:30.263 06:45:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:30.263 06:45:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:30.263 06:45:10 -- scripts/common.sh@24 -- # return 0 00:20:30.263 06:45:10 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:30.263 06:45:10 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:30.263 06:45:10 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:30.263 06:45:10 -- scripts/common.sh@322 -- # uname -s 00:20:30.263 06:45:10 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:30.263 06:45:10 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:30.263 06:45:10 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:30.263 06:45:10 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:30.263 06:45:10 -- scripts/common.sh@322 -- # uname -s 00:20:30.263 06:45:10 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:30.263 06:45:10 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:30.263 06:45:10 -- scripts/common.sh@327 -- # (( 2 )) 00:20:30.263 06:45:10 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:30.263 06:45:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:30.263 06:45:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:30.263 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.263 ************************************ 00:20:30.263 START TEST spdk_target_abort 00:20:30.263 ************************************ 00:20:30.263 06:45:10 -- common/autotest_common.sh@1104 -- # spdk_target 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:30.263 06:45:10 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:30.263 06:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.263 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 spdk_targetn1 00:20:30.521 06:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.521 06:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.521 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 [2024-07-12 
06:45:10.247994] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.521 06:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:30.521 06:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.521 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 06:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:30.521 06:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.521 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 06:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:30.521 06:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.521 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 [2024-07-12 06:45:10.280158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.521 06:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:30.521 06:45:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:30.522 06:45:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:33.798 Initializing NVMe Controllers 00:20:33.798 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:33.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:33.798 Initialization complete. Launching workers. 00:20:33.798 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10209, failed: 0 00:20:33.798 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1086, failed to submit 9123 00:20:33.798 success 832, unsuccess 254, failed 0 00:20:33.798 06:45:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:33.798 06:45:13 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:37.078 Initializing NVMe Controllers 00:20:37.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:37.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:37.078 Initialization complete. Launching workers. 00:20:37.078 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9000, failed: 0 00:20:37.078 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1164, failed to submit 7836 00:20:37.078 success 368, unsuccess 796, failed 0 00:20:37.078 06:45:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:37.078 06:45:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:40.447 Initializing NVMe Controllers 00:20:40.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:40.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:40.447 Initialization complete. Launching workers. 
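For reference, the target setup and abort sweep traced above reduce to a handful of manual steps. The sketch below is a reconstruction, assuming a running SPDK nvmf_tgt with the spdk_target bdev already attached; it uses scripts/rpc.py in place of the test harness's rpc_cmd wrapper, and every flag is copied from the trace.

  # Create the subsystem, expose the namespace, and listen on TCP 10.0.0.2:4420.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
  # Drive 4 KiB mixed read/write traffic and submit aborts against it; -q sets the queue depth.
  build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'

The per-run summaries above show why the queue depth matters: a deeper queue leaves more I/O in flight, so more abort commands find their target still queued.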
00:20:40.447 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31943, failed: 0 00:20:40.447 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2337, failed to submit 29606 00:20:40.447 success 428, unsuccess 1909, failed 0 00:20:40.447 06:45:20 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:40.447 06:45:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:40.447 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.447 06:45:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:40.447 06:45:20 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:40.447 06:45:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:40.447 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.706 06:45:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:40.706 06:45:20 -- target/abort_qd_sizes.sh@62 -- # killprocess 87382 00:20:40.706 06:45:20 -- common/autotest_common.sh@926 -- # '[' -z 87382 ']' 00:20:40.706 06:45:20 -- common/autotest_common.sh@930 -- # kill -0 87382 00:20:40.706 06:45:20 -- common/autotest_common.sh@931 -- # uname 00:20:40.706 06:45:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:40.706 06:45:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87382 00:20:40.706 06:45:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:40.706 06:45:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:40.706 killing process with pid 87382 00:20:40.706 06:45:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87382' 00:20:40.706 06:45:20 -- common/autotest_common.sh@945 -- # kill 87382 00:20:40.706 06:45:20 -- common/autotest_common.sh@950 -- # wait 87382 00:20:40.706 00:20:40.706 real 0m10.378s 00:20:40.706 user 0m42.501s 00:20:40.706 sys 0m2.197s 00:20:40.706 06:45:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.706 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.706 ************************************ 00:20:40.706 END TEST spdk_target_abort 00:20:40.706 ************************************ 00:20:40.706 06:45:20 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:40.706 06:45:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:40.706 06:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:40.706 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.706 ************************************ 00:20:40.706 START TEST kernel_target_abort 00:20:40.706 ************************************ 00:20:40.706 06:45:20 -- common/autotest_common.sh@1104 -- # kernel_target 00:20:40.706 06:45:20 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:40.706 06:45:20 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:40.706 06:45:20 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:40.706 06:45:20 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:40.706 06:45:20 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:40.706 06:45:20 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:40.706 06:45:20 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:40.706 06:45:20 -- nvmf/common.sh@627 -- # local block nvme 00:20:40.706 06:45:20 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:40.706 06:45:20 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:40.964 06:45:20 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:40.964 06:45:20 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:41.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.222 Waiting for block devices as requested 00:20:41.222 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.222 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.480 06:45:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:41.480 06:45:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:41.480 06:45:21 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:41.480 06:45:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:41.480 No valid GPT data, bailing 00:20:41.480 06:45:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:41.480 06:45:21 -- scripts/common.sh@393 -- # pt= 00:20:41.480 06:45:21 -- scripts/common.sh@394 -- # return 1 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:41.480 06:45:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:41.480 06:45:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:41.480 06:45:21 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:41.480 06:45:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:41.480 No valid GPT data, bailing 00:20:41.480 06:45:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:41.480 06:45:21 -- scripts/common.sh@393 -- # pt= 00:20:41.480 06:45:21 -- scripts/common.sh@394 -- # return 1 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:41.480 06:45:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:41.480 06:45:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:41.480 06:45:21 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:41.480 06:45:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:41.480 No valid GPT data, bailing 00:20:41.480 06:45:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:41.480 06:45:21 -- scripts/common.sh@393 -- # pt= 00:20:41.480 06:45:21 -- scripts/common.sh@394 -- # return 1 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:41.480 06:45:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:41.480 06:45:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:41.480 06:45:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:41.480 06:45:21 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:41.480 06:45:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:41.738 No valid GPT data, bailing 00:20:41.738 06:45:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:41.738 06:45:21 -- scripts/common.sh@393 -- # pt= 00:20:41.738 06:45:21 -- scripts/common.sh@394 -- # return 1 00:20:41.738 06:45:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:41.738 06:45:21 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:20:41.738 06:45:21 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:41.738 06:45:21 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:41.738 06:45:21 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:41.738 06:45:21 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:41.738 06:45:21 -- nvmf/common.sh@654 -- # echo 1 00:20:41.738 06:45:21 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:41.738 06:45:21 -- nvmf/common.sh@656 -- # echo 1 00:20:41.739 06:45:21 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:41.739 06:45:21 -- nvmf/common.sh@663 -- # echo tcp 00:20:41.739 06:45:21 -- nvmf/common.sh@664 -- # echo 4420 00:20:41.739 06:45:21 -- nvmf/common.sh@665 -- # echo ipv4 00:20:41.739 06:45:21 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:41.739 06:45:21 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b322988a-296a-4d08-987d-2f44d8098168 --hostid=b322988a-296a-4d08-987d-2f44d8098168 -a 10.0.0.1 -t tcp -s 4420 00:20:41.739 00:20:41.739 Discovery Log Number of Records 2, Generation counter 2 00:20:41.739 =====Discovery Log Entry 0====== 00:20:41.739 trtype: tcp 00:20:41.739 adrfam: ipv4 00:20:41.739 subtype: current discovery subsystem 00:20:41.739 treq: not specified, sq flow control disable supported 00:20:41.739 portid: 1 00:20:41.739 trsvcid: 4420 00:20:41.739 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:41.739 traddr: 10.0.0.1 00:20:41.739 eflags: none 00:20:41.739 sectype: none 00:20:41.739 =====Discovery Log Entry 1====== 00:20:41.739 trtype: tcp 00:20:41.739 adrfam: ipv4 00:20:41.739 subtype: nvme subsystem 00:20:41.739 treq: not specified, sq flow control disable supported 00:20:41.739 portid: 1 00:20:41.739 trsvcid: 4420 00:20:41.739 subnqn: kernel_target 00:20:41.739 traddr: 10.0.0.1 00:20:41.739 eflags: none 00:20:41.739 sectype: none 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
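The kernel-target leg just traced configures the in-kernel nvmet driver purely through configfs. A rough equivalent of those steps, reconstructed from the echoes in the trace, is sketched below; the exact attribute each bare `echo` lands in is inferred (attr_serial as the destination of the SPDK-kernel_target string is an assumption), and /dev/nvme1n3 is simply the first local namespace the script found free of partition data.

  modprobe nvmet nvmet_tcp
  cd /sys/kernel/config/nvmet
  mkdir subsystems/kernel_target subsystems/kernel_target/namespaces/1 ports/1
  echo SPDK-kernel_target > subsystems/kernel_target/attr_serial   # assumed target of the first echo
  echo 1 > subsystems/kernel_target/attr_allow_any_host
  echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
  echo 1 > subsystems/kernel_target/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  # Publishing is the symlink: the port serves the subsystem once this exists.
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/

The nvme discover output above confirms the result: the well-known discovery subsystem plus the kernel_target subsystem, both on 10.0.0.1:4420.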
00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:41.739 06:45:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:45.024 Initializing NVMe Controllers 00:20:45.024 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:45.024 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:45.024 Initialization complete. Launching workers. 00:20:45.024 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29837, failed: 0 00:20:45.024 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29837, failed to submit 0 00:20:45.024 success 0, unsuccess 29837, failed 0 00:20:45.024 06:45:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.024 06:45:24 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:48.310 Initializing NVMe Controllers 00:20:48.310 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:48.310 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:48.310 Initialization complete. Launching workers. 00:20:48.310 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 62721, failed: 0 00:20:48.310 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26605, failed to submit 36116 00:20:48.310 success 0, unsuccess 26605, failed 0 00:20:48.310 06:45:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.310 06:45:27 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:51.604 Initializing NVMe Controllers 00:20:51.604 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:51.604 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:51.604 Initialization complete. Launching workers. 
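The rabort helper seen here is just a loop: it concatenates the five transport-ID fields one by one (which is why the trace prints the growing target string five times) and then reruns the abort example at each queue depth. Condensed from the trace:

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
  for qd in "${qds[@]}"; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done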
00:20:51.604 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 78853, failed: 0 00:20:51.604 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19682, failed to submit 59171 00:20:51.604 success 0, unsuccess 19682, failed 0 00:20:51.604 06:45:31 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:51.604 06:45:31 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:51.605 06:45:31 -- nvmf/common.sh@677 -- # echo 0 00:20:51.605 06:45:31 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:51.605 06:45:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:51.605 06:45:31 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:51.605 06:45:31 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:51.605 06:45:31 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:51.605 06:45:31 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:51.605 00:20:51.605 real 0m10.495s 00:20:51.605 user 0m5.792s 00:20:51.605 sys 0m2.159s 00:20:51.605 06:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.605 06:45:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.605 ************************************ 00:20:51.605 END TEST kernel_target_abort 00:20:51.605 ************************************ 00:20:51.605 06:45:31 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:51.605 06:45:31 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:51.605 06:45:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:51.605 06:45:31 -- nvmf/common.sh@116 -- # sync 00:20:51.605 06:45:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:51.605 06:45:31 -- nvmf/common.sh@119 -- # set +e 00:20:51.605 06:45:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:51.605 06:45:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:51.605 rmmod nvme_tcp 00:20:51.605 rmmod nvme_fabrics 00:20:51.605 rmmod nvme_keyring 00:20:51.605 06:45:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:51.605 06:45:31 -- nvmf/common.sh@123 -- # set -e 00:20:51.605 06:45:31 -- nvmf/common.sh@124 -- # return 0 00:20:51.605 06:45:31 -- nvmf/common.sh@477 -- # '[' -n 87382 ']' 00:20:51.605 06:45:31 -- nvmf/common.sh@478 -- # killprocess 87382 00:20:51.605 06:45:31 -- common/autotest_common.sh@926 -- # '[' -z 87382 ']' 00:20:51.605 06:45:31 -- common/autotest_common.sh@930 -- # kill -0 87382 00:20:51.605 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (87382) - No such process 00:20:51.605 Process with pid 87382 is not found 00:20:51.605 06:45:31 -- common/autotest_common.sh@953 -- # echo 'Process with pid 87382 is not found' 00:20:51.605 06:45:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:51.605 06:45:31 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:52.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:52.172 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:52.172 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:52.172 06:45:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:52.172 06:45:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:52.172 06:45:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.172 06:45:31 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:52.172 06:45:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.172 06:45:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:52.172 06:45:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.172 06:45:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:52.172 00:20:52.172 real 0m24.346s 00:20:52.172 user 0m49.697s 00:20:52.172 sys 0m5.659s 00:20:52.172 06:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.172 ************************************ 00:20:52.172 END TEST nvmf_abort_qd_sizes 00:20:52.172 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:20:52.172 ************************************ 00:20:52.172 06:45:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:52.172 06:45:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:52.172 06:45:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:52.172 06:45:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:52.172 06:45:32 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:20:52.172 06:45:32 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:20:52.172 06:45:32 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:20:52.172 06:45:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:52.172 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:20:52.172 06:45:32 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:20:52.172 06:45:32 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:20:52.172 06:45:32 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:20:52.172 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:20:54.080 INFO: APP EXITING 00:20:54.080 INFO: killing all VMs 00:20:54.080 INFO: killing vhost app 00:20:54.080 INFO: EXIT DONE 00:20:54.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.648 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:54.648 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:55.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.475 Cleaning 00:20:55.475 Removing: /var/run/dpdk/spdk0/config 00:20:55.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:55.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:55.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:55.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:55.476 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:55.476 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:55.476 Removing: /var/run/dpdk/spdk1/config 00:20:55.476 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:55.476 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:55.476 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:55.476 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:55.476 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:55.476 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:55.476 Removing: /var/run/dpdk/spdk2/config 00:20:55.476 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:55.476 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:55.476 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:55.476 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:55.476 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:55.476 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:55.476 Removing: /var/run/dpdk/spdk3/config 00:20:55.476 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:55.476 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:55.476 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:55.476 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:55.476 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:55.476 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:55.476 Removing: /var/run/dpdk/spdk4/config 00:20:55.476 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:55.476 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:55.476 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:55.476 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:55.476 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:55.476 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:55.476 Removing: /dev/shm/nvmf_trace.0 00:20:55.476 Removing: /dev/shm/spdk_tgt_trace.pid65812 00:20:55.476 Removing: /var/run/dpdk/spdk0 00:20:55.476 Removing: /var/run/dpdk/spdk1 00:20:55.476 Removing: /var/run/dpdk/spdk2 00:20:55.476 Removing: /var/run/dpdk/spdk3 00:20:55.476 Removing: /var/run/dpdk/spdk4 00:20:55.476 Removing: /var/run/dpdk/spdk_pid65673 00:20:55.476 Removing: /var/run/dpdk/spdk_pid65812 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66049 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66244 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66379 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66448 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66522 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66602 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66678 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66711 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66741 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66807 00:20:55.476 Removing: /var/run/dpdk/spdk_pid66888 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67318 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67370 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67421 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67437 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67493 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67509 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67577 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67593 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67633 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67653 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67693 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67711 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67833 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67869 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67937 00:20:55.476 Removing: /var/run/dpdk/spdk_pid67989 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68013 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68072 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68091 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68120 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68140 
00:20:55.476 Removing: /var/run/dpdk/spdk_pid68171 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68190 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68225 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68239 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68273 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68293 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68322 00:20:55.476 Removing: /var/run/dpdk/spdk_pid68340 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68376 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68390 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68424 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68444 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68473 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68487 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68527 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68541 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68576 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68590 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68624 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68644 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68673 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68692 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68727 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68741 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68770 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68795 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68824 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68838 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68878 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68895 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68927 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68954 00:20:55.735 Removing: /var/run/dpdk/spdk_pid68987 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69001 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69036 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69055 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69092 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69159 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69233 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69541 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69553 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69584 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69602 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69610 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69628 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69646 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69654 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69672 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69690 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69698 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69716 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69734 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69742 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69760 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69778 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69786 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69804 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69821 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69830 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69865 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69872 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69904 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69956 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69988 00:20:55.735 Removing: /var/run/dpdk/spdk_pid69992 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70025 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70030 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70038 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70078 00:20:55.735 Removing: 
/var/run/dpdk/spdk_pid70090 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70116 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70124 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70126 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70133 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70141 00:20:55.735 Removing: /var/run/dpdk/spdk_pid70148 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70156 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70158 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70190 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70211 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70220 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70249 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70258 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70266 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70301 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70318 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70339 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70352 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70354 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70357 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70369 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70371 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70384 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70386 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70459 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70501 00:20:55.736 Removing: /var/run/dpdk/spdk_pid70599 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70636 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70675 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70689 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70704 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70723 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70753 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70768 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70830 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70844 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70893 00:20:55.996 Removing: /var/run/dpdk/spdk_pid70961 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71013 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71041 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71126 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71161 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71198 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71408 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71500 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71528 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71845 00:20:55.996 Removing: /var/run/dpdk/spdk_pid71883 00:20:55.996 Removing: /var/run/dpdk/spdk_pid72188 00:20:55.996 Removing: /var/run/dpdk/spdk_pid72598 00:20:55.996 Removing: /var/run/dpdk/spdk_pid72866 00:20:55.996 Removing: /var/run/dpdk/spdk_pid73605 00:20:55.996 Removing: /var/run/dpdk/spdk_pid74424 00:20:55.996 Removing: /var/run/dpdk/spdk_pid74540 00:20:55.996 Removing: /var/run/dpdk/spdk_pid74608 00:20:55.996 Removing: /var/run/dpdk/spdk_pid75861 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76066 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76387 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76499 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76630 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76657 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76685 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76711 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76810 00:20:55.996 Removing: /var/run/dpdk/spdk_pid76944 00:20:55.996 Removing: /var/run/dpdk/spdk_pid77088 00:20:55.996 Removing: /var/run/dpdk/spdk_pid77162 00:20:55.996 Removing: /var/run/dpdk/spdk_pid77550 00:20:55.996 Removing: /var/run/dpdk/spdk_pid77890 
00:20:55.996 Removing: /var/run/dpdk/spdk_pid77903 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80087 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80089 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80352 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80371 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80391 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80416 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80424 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80510 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80512 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80620 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80627 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80741 00:20:55.996 Removing: /var/run/dpdk/spdk_pid80743 00:20:55.996 Removing: /var/run/dpdk/spdk_pid81140 00:20:55.996 Removing: /var/run/dpdk/spdk_pid81183 00:20:55.996 Removing: /var/run/dpdk/spdk_pid81292 00:20:55.996 Removing: /var/run/dpdk/spdk_pid81370 00:20:55.996 Removing: /var/run/dpdk/spdk_pid81673 00:20:55.996 Removing: /var/run/dpdk/spdk_pid81875 00:20:55.996 Removing: /var/run/dpdk/spdk_pid82246 00:20:55.996 Removing: /var/run/dpdk/spdk_pid82777 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83210 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83265 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83318 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83369 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83465 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83525 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83578 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83625 00:20:55.996 Removing: /var/run/dpdk/spdk_pid83946 00:20:55.996 Removing: /var/run/dpdk/spdk_pid85118 00:20:55.996 Removing: /var/run/dpdk/spdk_pid85267 00:20:55.996 Removing: /var/run/dpdk/spdk_pid85509 00:20:55.996 Removing: /var/run/dpdk/spdk_pid86080 00:20:55.996 Removing: /var/run/dpdk/spdk_pid86243 00:20:55.996 Removing: /var/run/dpdk/spdk_pid86404 00:20:55.996 Removing: /var/run/dpdk/spdk_pid86501 00:20:55.996 Removing: /var/run/dpdk/spdk_pid86671 00:20:55.996 Removing: /var/run/dpdk/spdk_pid86780 00:20:55.996 Removing: /var/run/dpdk/spdk_pid87434 00:20:55.996 Removing: /var/run/dpdk/spdk_pid87470 00:20:55.996 Removing: /var/run/dpdk/spdk_pid87505 00:20:55.996 Removing: /var/run/dpdk/spdk_pid87749 00:20:55.996 Removing: /var/run/dpdk/spdk_pid87790 00:20:55.996 Removing: /var/run/dpdk/spdk_pid87820 00:20:56.255 Clean 00:20:56.255 killing process with pid 59993 00:20:56.255 killing process with pid 59995 00:20:56.255 06:45:36 -- common/autotest_common.sh@1436 -- # return 0 00:20:56.255 06:45:36 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:20:56.255 06:45:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.255 06:45:36 -- common/autotest_common.sh@10 -- # set +x 00:20:56.255 06:45:36 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:20:56.255 06:45:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.255 06:45:36 -- common/autotest_common.sh@10 -- # set +x 00:20:56.255 06:45:36 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:56.255 06:45:36 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:56.255 06:45:36 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:56.255 06:45:36 -- spdk/autotest.sh@394 -- # hash lcov 00:20:56.255 06:45:36 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:56.255 06:45:36 -- spdk/autotest.sh@396 -- # hostname 00:20:56.255 06:45:36 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:56.514 geninfo: WARNING: invalid characters removed from testname! 00:21:28.611 06:46:03 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:28.611 06:46:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:30.516 06:46:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:33.798 06:46:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:36.329 06:46:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:39.616 06:46:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:42.179 06:46:21 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:42.179 06:46:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.179 06:46:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:42.179 06:46:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.179 06:46:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.179 06:46:21 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.179 06:46:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.179 06:46:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.179 06:46:21 -- paths/export.sh@5 -- $ export PATH 00:21:42.179 06:46:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.179 06:46:21 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:42.179 06:46:21 -- common/autobuild_common.sh@435 -- $ date +%s 00:21:42.179 06:46:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720766781.XXXXXX 00:21:42.179 06:46:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720766781.ZR8TkJ 00:21:42.179 06:46:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:21:42.179 06:46:21 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:21:42.179 06:46:21 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:21:42.179 06:46:21 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:21:42.179 06:46:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:42.179 06:46:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:42.179 06:46:21 -- common/autobuild_common.sh@451 -- $ get_config_params 00:21:42.179 06:46:21 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:21:42.179 06:46:21 -- common/autotest_common.sh@10 -- $ set +x 00:21:42.179 06:46:21 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:21:42.179 06:46:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:42.179 06:46:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:21:42.179 06:46:21 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:42.179 06:46:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:42.180 06:46:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:42.180 06:46:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:42.180 06:46:21 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:42.180 06:46:21 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:42.180 06:46:21 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:42.180 06:46:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:42.180 + [[ -n 5859 ]] 00:21:42.180 + sudo kill 5859 00:21:42.189 [Pipeline] } 00:21:42.214 [Pipeline] // timeout 00:21:42.219 [Pipeline] } 00:21:42.238 [Pipeline] // stage 00:21:42.244 [Pipeline] } 00:21:42.260 [Pipeline] // catchError 00:21:42.270 [Pipeline] stage 00:21:42.272 [Pipeline] { (Stop VM) 00:21:42.286 [Pipeline] sh 00:21:42.567 + vagrant halt 00:21:46.760 ==> default: Halting domain... 00:21:53.332 [Pipeline] sh 00:21:53.606 + vagrant destroy -f 00:21:57.793 ==> default: Removing domain... 00:21:57.806 [Pipeline] sh 00:21:58.156 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:58.168 [Pipeline] } 00:21:58.191 [Pipeline] // stage 00:21:58.197 [Pipeline] } 00:21:58.214 [Pipeline] // dir 00:21:58.221 [Pipeline] } 00:21:58.242 [Pipeline] // wrap 00:21:58.249 [Pipeline] } 00:21:58.266 [Pipeline] // catchError 00:21:58.275 [Pipeline] stage 00:21:58.277 [Pipeline] { (Epilogue) 00:21:58.293 [Pipeline] sh 00:21:58.570 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:05.146 [Pipeline] catchError 00:22:05.148 [Pipeline] { 00:22:05.162 [Pipeline] sh 00:22:05.442 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:05.702 Artifacts sizes are good 00:22:05.712 [Pipeline] } 00:22:05.730 [Pipeline] // catchError 00:22:05.741 [Pipeline] archiveArtifacts 00:22:05.749 Archiving artifacts 00:22:05.917 [Pipeline] cleanWs 00:22:05.929 [WS-CLEANUP] Deleting project workspace... 00:22:05.929 [WS-CLEANUP] Deferred wipeout is used... 00:22:05.936 [WS-CLEANUP] done 00:22:05.937 [Pipeline] } 00:22:05.956 [Pipeline] // stage 00:22:05.962 [Pipeline] } 00:22:05.979 [Pipeline] // node 00:22:05.986 [Pipeline] End of Pipeline 00:22:06.034 Finished: SUCCESS
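For completeness, the coverage post-processing that ran in the epilogue (just before the VM teardown above) follows the usual lcov capture, merge, filter pattern. The sketch below condenses it; the options are copied from the trace, with the shared flag set factored into a variable and only two of the filtered directories shown (the run also strips */examples/vmd/*, */app/spdk_lspci/*, and */app/spdk_top/*).

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
  # Capture counters from the instrumented tree, then fold them into the baseline.
  lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
  # Strip third-party and system code from the final report.
  lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov $LCOV_OPTS -r cov_total.info '/usr/*' -o cov_total.info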